Linux KVM as a Learning Tool
the guest will exit from guest mode, and the configured outb() callback function is called in user mode (with values 0xf1 and 0x0a for its second and third parameters, respectively).
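Such an exit is produced by the guest executing an OUT instruction. A hedged 16-bit sketch of guest code that would trigger exactly this callback (NASM syntax; the port and value match the example above):

```asm
BITS 16
mov al, 0x0a      ; becomes the callback's third parameter (the value)
out 0xf1, al      ; port 0xf1 becomes the second parameter; this traps out of guest mode
hlt
```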
Initially, use dummy callbacks. Create and reference them in a variable called my_callbacks, as shown in Listing 2. Most field names are self-explanatory, but for a brief description of each of them, refer to the comments in the structure definition in libkvm.h.
Listing 2. I/O Callbacks (used in launcher.c)
static int my_inb(void *opaque, uint16_t addr, uint8_t *data)
{
    puts("inb");
    return 0;
}

static int my_inw(void *opaque, uint16_t addr, uint16_t *data)
{
    puts("inw");
    return 0;
}

static int my_inl(void *opaque, uint16_t addr, uint32_t *data)
{
    puts("inl");
    return 0;
}

static int my_outb(void *opaque, uint16_t addr, uint8_t data)
{
    puts("outb");
    return 0;
}

static int my_outw(void *opaque, uint16_t addr, uint16_t data)
{
    puts("outw");
    return 0;
}

static int my_outl(void *opaque, uint16_t addr, uint32_t data)
{
    puts("outl");
    return 0;
}

static int my_pre_kvm_run(void *opaque, int vcpu)
{
    return 0;
}

/* ... and similar for my_mmio_read, my_mmio_write, my_debug, my_halt,
 * my_shutdown, my_io_window, my_try_push_interrupts, my_try_push_nmi,
 * my_post_kvm_run, and my_tpr_access */

static struct kvm_callbacks my_callbacks = {
    .inb                 = my_inb,
    .inw                 = my_inw,
    .inl                 = my_inl,
    .outb                = my_outb,
    .outw                = my_outw,
    .outl                = my_outl,
    .mmio_read           = my_mmio_read,
    .mmio_write          = my_mmio_write,
    .debug               = my_debug,
    .halt                = my_halt,
    .io_window           = my_io_window,
    .try_push_interrupts = my_try_push_interrupts,
    .try_push_nmi        = my_try_push_nmi, /* added in kvm-77 */
    .post_kvm_run        = my_post_kvm_run,
    .pre_kvm_run         = my_pre_kvm_run,
    .tpr_access          = my_tpr_access
};
To create the virtual machine itself, use kvm_create(), whose second argument is the amount of RAM in bytes desired for it, and the third argument is the address of a location that will in turn contain the address of the beginning of the memory space reserved for the virtual machine (the “guest memory” box in Figure 1). Note that kvm_create() does not allocate memory for the virtual machine.
To create the first virtual CPU, use kvm_create_vcpu() with a value of 0 for the slot parameter—versions less than 65 create the first virtual CPU during the call to kvm_create().
There are several methods to allocate memory for the virtual machine—for example, kvm_create_phys_mem(). The second argument of kvm_create_phys_mem() is the starting physical address of the requested region in the guest memory (in the pseudo-“physical memory” of the virtual machine, not in the physical memory of the host). The third argument is the length, in bytes, of the region. The fourth indicates whether dirty page logging should be activated in the requested region, and the fifth argument indicates whether the pages may be written. On success, it returns the location of the allocated memory area as an address in the virtual address space of the calling process.
Invoke the functions of Listing 1 within the same KVM context to create your first virtual machine, and execute it with kvm_run(). This function will return only if an I/O handler referenced in my_callbacks returns a nonzero value, or if an exception occurs that neither the guest OS nor KVM can handle.
Listing 3 contains the code for the launcher, including the load_file() function to copy the guest kernel image from a file to the virtual machine's memory space. Why is this image copied at offset 0xf0000 of the guest's memory space? Because of the way real mode works, as explained in the next section.
Listing 3. Our First Virtual Machine Launcher (launcher.c)
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <libkvm.h>

/* callback definitions as shown in Listing 2 go here */

void load_file(void *mem, const char *filename)
{
    int fd;
    int nr;

    fd = open(filename, O_RDONLY);
    if (fd == -1) {
        fprintf(stderr, "Cannot open %s", filename);
        perror("open");
        exit(1);
    }
    while ((nr = read(fd, mem, 4096)) != -1 && nr != 0)
        mem += nr;
    if (nr == -1) {
        perror("read");
        exit(1);
    }
    close(fd);
}

#define MEMORY_SIZE (0x1000000) /* 16 MB */
#define FIRST_VCPU (0)

int main(int argc, char *argv[])
{
    kvm_context_t kvm;
    void *memory_area;

    /* Second argument is an opaque, we don't use it yet */
    kvm = kvm_init(&my_callbacks, NULL);
    if (!kvm) {
        fprintf(stderr, "KVM init failed");
        exit(1);
    }
    if (kvm_create(kvm, MEMORY_SIZE, &memory_area) != 0) {
        fprintf(stderr, "VM creation failed");
        exit(1);
    }
#ifndef KVM_VERSION_LESS_THAN_65
    if (kvm_create_vcpu(kvm, FIRST_VCPU)) {
        fprintf(stderr, "VCPU creation failed");
        exit(1);
    }
#endif
    memory_area = kvm_create_phys_mem(kvm, 0, MEMORY_SIZE, 0, 1);
    load_file(memory_area + 0xf0000, argv[1]);
    kvm_run(kvm, FIRST_VCPU);
    return 0;
}
http://www.linuxjournal.com/magazine/linux-kvm-learning-tool?page=0,1&quicktabs_1=1
I'm still learning, and although I have designed systems execs, operating systems, and real-time safety-critical systems, I'm sure that I still don't understand the mechanisms at work within MicroPython's asyncio.
I'm sure that I'm not using it correctly: I can see latency on the processor with my added co-routine compared to the other processors running Peter's unadulterated asynchronous resilient MQTT. I suspect it's the manner in which the co-routines are being instantiated and added to the queue, but I'm guessing.
I'll post the modified code below; it's mainly the unadulterated exemplar, but with just a simple asynchronous co-routine added to flash an external LED.
I can observe the message receipts on all three ESP32s via the blue LED: when running the identical unaltered exemplar code, they all receive pretty much simultaneously; when one ESP32 runs with my test coroutine, I can see latency, or indeed variable lag.
I'd like to understand how to use asyncio. What happens when I declare a co-routine via async def? Is a function instantiated then, and is it placed on the queue? Is there a method by which I can observe the queue, and what is the effect of the various event or sleep/wait triggers? Otherwise I'm merely experimenting.
The micropython documentation that I've discovered is good at getting you started, but as it's a particular implementation to fit the constraints of these microprocessors, I often fall foul of misuse.
If someone could point me to a specification of the objects/functions, help me switch on diagnostics so that I can see the queue, and tell me whether I'm abusing the mechanisms, that would help.
Indeed I'm pretty sure that in my simple example I'm not using the declarations in the right place, nor instigating the waits properly. I'm sure, because someone has already told me. Help understanding those, and telling me what I'm doing wrong, would be an absolute boon.
So, help with the specification (MicroPython), the mechanism of instantiation of a co-routine, its placement in the queue, its removal, and diagnostics would be great.
Thanks
Robin
Here's the code:-
Code: Select all
# range.py Test of asynchronous mqtt client with clean session False.
# (C) Copyright Peter Hinch 2017-2019.
# Released under the MIT licence.
# Public brokers
# This demo is for wireless range tests. If OOR the red LED will light.
# In range the blue LED will pulse for each received message.
# Uses clean sessions to avoid backlog when OOR.
# red LED: ON == WiFi fail
# blue LED pulse == message received
# Publishes connection statistics.

from mqtt_as import MQTTClient, config
from config import wifi_led, blue_led
import uasyncio as asyncio
import machine
import gc

loop = asyncio.get_event_loop()
outages = 0

async def pulse():  # This demo pulses blue LED each time a subscribed msg arrives.
    blue_led(True)
    #await client.publish('foo_topic', 'Message received')
    await asyncio.sleep(1)
    blue_led(False)

def sub_cb(topic, msg):
    print((topic, msg))
    loop.create_task(pulse())

async def wifi_han(state):
    global outages
    wifi_led(not state)  # Light LED when WiFi down
    if state:
        print('We are connected to broker.')
    else:
        outages += 1
        print('WiFi or broker is down.')
    await asyncio.sleep(1)

async def conn_han(client):
    await client.subscribe('foo_topic', 1)

async def main(client):
    try:
        await client.connect()
    except OSError:
        print('Connection failed.')
        return
    n = 0
    while True:
        await asyncio.sleep(5)
        print('publish', n)
        # If WiFi is down the following will pause for the duration.
        await client.publish('result', '{} repubs: {} outages: {}'.format(
            n, client.REPUB_COUNT, outages), qos=1)
        n += 1

#import machine
#pin2 = machine.Pin(2, machine.Pin.OUT)  # the internal blue LED
pin4 = machine.Pin(4, machine.Pin.OUT)  # externally attached LED

async def testfunction():
    while True:
        #gc.collect()
        #print('LED On')
        pin4.on()
        await asyncio.sleep(0.05)
        #print('LED Off')
        pin4.off()
        await asyncio.sleep(5)
        #print('Out')

# Define configuration
config['subs_cb'] = sub_cb
config['wifi_coro'] = wifi_han
config['will'] = ('result', 'Goodbye cruel world!', False, 0)
config['connect_coro'] = conn_han
config['keepalive'] = 120

# Set up client
MQTTClient.DEBUG = True  # Optional
client = MQTTClient(config)

loop.create_task(testfunction())
try:
    loop.run_until_complete(main(client))
finally:  # Prevent LmacRxBlk:1 errors.
    client.close()
    blue_led(True)
https://forum.micropython.org/viewtopic.php?p=38347
Test your app with temporary, test data before you ship
This blog post was authored by Andrew Byrne, a Senior Content Developer on the Windows Phone Developer Content team.
– Adam
Sometimes I need data to test my app in the emulator. The Windows Phone 8 emulator is a powerful tool, but it doesn’t have the kind of data I need—a temporary, test data set that I won’t actually ship with the app. Today’s post explores one way to do this.
Let’s start with a little background. I have an app that uses images from the user’s photo library, contacts, and music library to create lock screen background images. Testing on a real device is my preferred way to test, and it trumps most other options. But I make an exception when I need to move quickly through initial, iterative testing. However, I couldn’t get far on the emulator with this app because I didn’t have enough test data.
The following screenshots show the standard emulator state of the photo library, contacts, and music library.
The emulator comes with eight standard images in the Sample Photo folder. While that’s a starting point, I really need to test with a larger number of images.
The Windows Phone 8 emulator has no contacts data and I can’t add my Microsoft account to load my contacts. The music library is just as empty.
For those of you familiar with Blend for Visual Studio, in Blend there’s a New Sample Data command that’s useful when you are prototyping, before you’ve written any code at all. Another Blend feature is Sample Data from Class, which is design-time only. Neither of these approaches suits my needs. In my case, I want to test with specific images at runtime. I am populating my data model with this data, not binding to it directly from the UI. So I am using a ‘sample data from developer’ approach which means I supply all test data, and I control when it is used.
Here’s the process I used to load temporary test data into the emulator:
1. Put the data you want to use for testing into a folder on your dev machine. For example, I created a folder called Test. As you can see in the following screenshot, I created a subfolder called albums to hold images of album covers, and a subfolder called faces for images of my contacts.
2. In Visual Studio, use File | Add Existing to add the Test folder and its contents to your solution, as illustrated in the following screenshot.
3. Select each file and make sure its Build Action is set to Content in the Properties pane. This means the file is included in the Content output group of your project, but won’t be compiled. It won’t be part of the binary you are building for your app, and it won’t affect your app size or its time to load and start on the emulator.
4. Set the property Copy to Output Directory to Copy always or Copy if newer. Using either setting ensures that the file is copied to the output and packaged with your app. Here’s an example:
At this point the test data will be copied into your app’s output directory when you build it. The data is part of the XAP package for the app. When you deploy the app to a phone or to the emulator, the installation directory of the app contains all of this test data, in the exact same folder hierarchy as you defined in Solution Explorer.
In my case, the project output folder contains the two folders in which I loaded my test data; the XAP file is 1.86 MB. I’m showing Debug mode output in the following screenshot, but at this point the same output would appear in Release mode.
I rename the XAP with the .zip extension so I can look inside. In the XAP (.zip) file I see the following:
My app package contains all my test data. In my case that translates to 160 images that make up 1.055 MB of the total compressed size of the app package. Nearly 57% of my XAP is taken up with test data that will never see the light of day in the published app! If the user is downloading my app on a slow network connection, this would be noticeable, and that’s not good.
Make sure your test data is deployed only in debug builds of your app
At this point, the test data is stored in the app's XAP file for both Debug and Release modes. To make sure it only exists when testing in Debug mode, do the following:
1. Open the project file for your project—the .csproj or .vbproj file in your solution.
2. Locate your test data in that file. There will be one Content element for each file you added to the project.
3. Add a Condition attribute to each Content element. Here’s an example:
<Content Include="Test\albums\raw\album103.jpg" Condition="'$(Configuration)' == 'Debug'">
  <CopyToOutputDirectory>Always</CopyToOutputDirectory>
</Content>
In this example, the compiler only includes album103.jpg as Content in my project if the Configuration it is building is Debug.
4. Add this attribute to all Content elements for each test file in your project file.
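Alternatively (standard MSBuild behavior rather than anything Windows Phone specific), you can group the test files under a single conditional ItemGroup so that you don't have to edit each Content element; the folder names here mirror the earlier example but are illustrative:

```xml
<ItemGroup Condition="'$(Configuration)' == 'Debug'">
  <Content Include="Test\albums\raw\*.jpg">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </Content>
  <Content Include="Test\faces\*.jpg">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </Content>
</ItemGroup>
```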
Now, when you build your app in Release mode, the test data won’t show up in the output and is not packaged in the XAP. In fact, in my case the XAP shrinks to roughly 717 KB. Not only will my customers be happier downloading my app at a reasonable size, but I also have a clean separation between my debug setup and my release setup. Test data I am interested in using as I debug my app isn’t part of the release build of my app.
Conditionally accessing test data
At this point, I have test data that shows up only when I build my app in Debug mode. What I want to do next is to make sure my app attempts to use this data only in Debug mode and, in my case, only on the emulator. Here are the important parts of my code that make this happen.
public async Task RefreshAsync(int maxCovers, bool random)
{
    bool usingTestData = false;
#if DEBUG
    if (Microsoft.Devices.Environment.DeviceType == DeviceType.Emulator)
    {
        LoadTestData(maxCovers, random);
        usingTestData = true;
    }
#endif
    if (!usingTestData)
    {
        LoadRealData(maxCovers, random);
    }
}
The #if DEBUG … #endif conditional compilation statement wraps code that will run only when the project is built in Debug mode. The code inside that block then checks whether the app is running in the emulator. If it is, a call is made to LoadTestData, which uses the storage APIs to load the test images into my app. The usingTestData local variable records whether test data is being used and, if it is not, the app proceeds to load real data.
The following table summarizes all possible permutations of flow through the conditional code.
I marked the Emulator/Release permutation with an asterisk because in that case I am back to square one – calling my real API to load real data, which is nonexistent on the phone. That’s okay because my core testing scenario is using test data on the emulator with a debug build of my app. You can change these conditions to suit your needs. For example, if you also want to load test data for your app in debug, when it runs on your phone, just remove the check for Emulator.
When the app is deployed, the test data will be found in the installation folder for the app. Luckily, I can access that folder in code, as illustrated in the following lines of code:
private async void LoadTestData(int maxCovers, bool random)
{
    _refreshing = true;
    var testImageFolder = await StorageFolder.GetFolderFromPathAsync(
        Package.Current.InstalledLocation.Path + @"\Test\faces");
    var images = await testImageFolder.GetFilesAsync();
I’m using the new Windows Runtime storage APIs in the preceding code to access the installation folder, via the Package.Current.InstalledLocation.Path property.
So, that’s it. I now can use temporary test data during app development without shipping it with my app. This approach makes my early testing cycles using the emulator a lot more useful for my data-centric apps. I like it because I can define exactly what the expected test data will be and then step through my code in a very predictable manner.
Updated November 7, 2014 11:45 pm
Join the conversation
https://blogs.windows.com/buildingapps/2013/07/31/test-your-app-with-temporary-test-data-before-you-ship/
I need a little help with this, and since it's Thanksgiving weekend I can't get a hold of my teacher for help, but the question is this.
A jailer has 1000 prisoners, locked behind doors which lock and unlock with the turn of a key. The king has ordered that some of the prisoners be freed. In particular, he wants them freed using the following algorithm. Starting with the first cell, turn the key once on each lock (hence unlocking all prisoners temporarily). Starting with the second cell, turn every other lock (500 locks), locking in half of the prisoners. Starting with the third cell, turn every third key, locking some prisoners in and unlocking others. Continue this until you have done this for starting on cell 1000.
Which prisoners are set free and how many times were their locks changed?
My array size is only 5 because I'm working out which cells are locked/unlocked up to the 5th cell, to make sure it's all working correctly, which it is not. I've tried to figure this thing out all day. I'm not looking for the answer, but really for what I'm missing in the logic progression of turning the keys.
Code:
#include <iostream>
using namespace std;

const int arraysize = 5;

int main()
{
    int i, startcell;
    bool prisonerlocked[arraysize];

    // initialize the cells to true - all prisoners start locked in
    for (i = 0; i < arraysize; i++)
    {
        prisonerlocked[i] = true;
    }

    // open and close cells according to the algorithm: pass number
    // startcell toggles every startcell-th lock, starting at cell
    // startcell (cell n is stored at index n - 1)
    for (startcell = 1; startcell <= arraysize; startcell++)
    {
        for (i = startcell - 1; i < arraysize; i += startcell)
        {
            prisonerlocked[i] = !prisonerlocked[i];
        }
    }

    // print out value of each cell to verify i'm right
    for (i = 0; i < arraysize; i++)
        cout << prisonerlocked[i] << endl;

    return 0;
}
http://cboard.cprogramming.com/cplusplus-programming/85800-looking-little-bit-help-my-logic.html
Every maker has their favourite sensor, board, component.
For me the humble L298N is my motor controller of choice, the DS18B20 for temperature sensing, and IR obstacle sensor for simple input.
But recently I stumbled upon the TMP36, another temperature sensor, but this one is analogue, meaning that the output of the sensor is a voltage, rather than a string of data as with the DS18B20.
After having used the sensor with a Raspberry Pi via an MCP3008 ADC, I wondered if I could use it with the micro:bit and the MicroPython language.
So here goes.
Hardware
The circuit for the project is really simple. We just connect the 3V and GND from the micro:bit to the sensor. Then we connect the middle output pin of the TMP36 to pin0 of the micro:bit.
Connecting the TMP36 with croc clips is possible, but be careful that the pins/clips don't touch each other.
I'm lucky enough to have a Proto-Pic exhi:bit provided by CPC for a future micro:bit expansion board mega test that I am working on.
Now plug in the micro USB lead to your micro:bit, and then connect it to your computer, where it should appear as a USB flash drive.
Software
Everyone who knows me knows that I love Python, so I coded this project using the Mu editor.
Code
To start, I imported all of the microbit library.
from microbit import *
Then I created a while True loop which will continuously run the code inside of the loop. Remember the code inside the loop is now indented.
while True:
Now we read the raw analogue voltage which the output pin of our TMP36 is sending to pin0 of the micro:bit. Then we multiply that value by the supply voltage of 3300 millivolts divided by the number of analogue steps that our micro:bit can read (0 to 1023).
raw = pin0.read_analog() * (3300 / 1023.0)
To convert the values into something human readable, in this case the temperature in Celsius, we have to do a little more maths. We subtract 100 from the raw value, divide it by 10, and finally subtract 40.
temp_C = ((raw - 100.0) / 10) - 40.0
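A quick sanity check of this arithmetic can be run in ordinary Python on a PC (the function name here is mine, not from the micro:bit code):

```python
def tmp36_celsius(raw_mv):
    # same arithmetic as the micro:bit line above
    return ((raw_mv - 100.0) / 10) - 40.0

# algebraically the same as the usual TMP36 formula, (mV - 500) / 10
for mv in (500.0, 750.0, 1000.0):
    assert abs(tmp36_celsius(mv) - (mv - 500.0) / 10) < 1e-9

print(tmp36_celsius(750.0))  # 25.0 degrees C at 750 mV
```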
Now in order to see the value, we print it to the REPL. But to keep the temperature values neat, we restrict them to three decimal places using round().
print(round(temp_C,3))
Lastly we sleep for 1 second, before the loop repeats.
sleep(1000)
Complete code listing
from microbit import *

while True:
    raw = pin0.read_analog() * (3300 / 1023.0)
    temp_C = ((raw - 100.0) / 10) - 40.0
    print(round(temp_C, 3))
    sleep(1000)
Flash the code to your micro:bit!
Click on Flash to send the code to your attached micro:bit. When ready, click on REPL to see the temperature data scroll across the screen.
So what can I do with this?
If you have a few micro:bits then you could use the radio functionality to send temperature data to a central micro:bit, which can then collate and act upon the data it receives.
Accuracy
The TMP36's accuracy is only around ±2°C, so this sensor is not suitable for mission-critical or scientific projects.
http://bigl.es/using-a-tmp36-sensor-with-micro-bit/
Created on 2009-06-07 21:30 by Neil Muller, last changed 2010-02-09 16:56 by pitrou. This issue is now closed.
In py3k, ElementTree no longer correctly converts characters to entities
when they can't be represented in the requested output encoding.
Python 2:
>>> import xml.etree.ElementTree as ET
>>> e = ET.XML("<?xml version='1.0' encoding='iso-8859-1'?><body>t\xe3t</body>")
>>> ET.tostring(e, 'ascii')
"<?xml version='1.0' encoding='ascii'?>\n<body>t&#227;t</body>"
Python 3:
>>> import xml.etree.ElementTree as ET
>>> e = ET.XML("<?xml version='1.0' encoding='iso-8859-1'?><body>t\xe3t</body>")
>>> ET.tostring(e, 'ascii')
.....
UnicodeEncodeError: 'ascii' codec can't encode characters in position 1-2: ordinal not in range(128)
It looks like _encode_entity isn't ever called inside ElementTree
anymore - it probably should be called as part of _encode for characters
that can't be represented.
Simple possible patch uploaded
This doesn't give the expected answer for the test above, but does work
when starting from an XML file in utf-8 encoding. I still need to
determine why this happens.
> This doesn't give the expected answer for the test above
Which is obviously due to not comparing apples with apples, as I should
be using a byte-string in the py3k example.
>>> import xml.etree.ElementTree as ET
>>> e = ET.XML(b"<?xml version='1.0' encoding='iso-8859-1'?><body>t\xe3t</body>")
>>> ET.tostring(e, 'ascii')
Fails without the patch, behaves as expected with the patch.
Updated patch - adds a test for this.
This regression is probably annoying enough to make it a blocker.
Umm. Isn't _encode used to encode tags and attribute names? The charref
syntax is only valid in CDATA sections and attribute values, which are
encoded by the corresponding _escape functions. I suspect this patch will
make things blow up on a non-ASCII tag/attribute name.
Did you look at the 1.3 alpha code base when you came up with this idea?
Unfortunately, 1.3's _encode is used for a different purpose...
I don't have time to test it tonight, but I suspect that 1.3's
escape_data/escape_attrib functions might work better under 3.X; they do
the text.replace dance first, and then an explicit text.encode(encoding,
"xmlcharrefreplace") at the end. E.g.
def _escape_cdata(text, encoding):
    # escape character data
    try:
        # it's worth avoiding do-nothing calls for strings that are
        # shorter than 500 character, or so.  assume that's, by far,
        # the most common case in most applications.
        if "&" in text:
            text = text.replace("&", "&amp;")
        if "<" in text:
            text = text.replace("<", "&lt;")
        if ">" in text:
            text = text.replace(">", "&gt;")
        return text.encode(encoding, "xmlcharrefreplace")
    except (TypeError, AttributeError):
        _raise_serialization_error(text)
The attached patch includes Neil's original additions to test_xml_etree.py.
I also noticed that _encode_entity wasn't being called in ElementTree in
py3k, with the important bit being the nested function
escape_entities(), in conjunction with _escape and _escape_map.
In 2.x, _encode_entity() is used after _encode() throws Unicode
exceptions [1], so I figured it would make sense to take the core
functionality of _escape_entities() and integrate it into _encode in the
same fashion -- when an exception is thrown.
Basically, I:
- changed the _escape regexp from "[\u0080-\uffff]" to "[\x80-\xff]"
- extracted _encode_entity.escape_entities() and made it
_escape_entities of module scope
- removed _encode_entity()
- added UnicodeEncodeError exception in _encode()
I'm not sure what the expected outcome is supposed to be when the text
is not type bytes but str. With this patch, the output has
b"t&#195;&#163;t" rather than b"t&#227;t".
Hope this is a step in the right direction.
[1] ElementTree.py:814, ElementTree.py:829, python 2.7 HEAD r50941
That's backwards, unless I'm missing something here: charrefs represent
Unicode characters, not UTF-8 byte values. The character "LATIN SMALL
LETTER A WITH TILDE" with the character value 227 should be represented as
"&#227;" if serialized to an encoding that doesn't support non-ASCII
characters.
And there's no need to use RE:s to filter things under 3.X; those parts of
ET 1.2 are there for pre-2.0 compatibility.
Did you try running the tests with the escape function I posted?
Thanks for the explanation -- looks like I was way off base on that one.
I took a look at the code you provided but it doesn't work as a drop-in
replacement for _escape_cdata, since that function returns a string
rather than bytes.
However taking your code, calling it _encode_cdata and then refactoring
all calls _encode(_escape_cdata(x), encoding) to _encode_cdata(x,
encoding) seems to do the trick and passes the tests.
Specific example:
- file.write(_encode(_escape_cdata(node.text), encoding))
+ file.write(_encode_cdata(node.text, encoding))
One minor modification is to return the string as is if encoding=None,
just like _encode:
def _encode_cdata(text, encoding):
    # escape character data
    try:
        text = text.replace("&", "&amp;")
        text = text.replace("<", "&lt;")
        text = text.replace(">", "&gt;")
        if encoding:
            return text.encode(encoding, "xmlcharrefreplace")
        else:
            return text
    except (TypeError, AttributeError):
        _raise_serialization_error(text)
effbot, do you have an opinion about the latest patch? It'd be nice to
not have to delay the release for this.
I disagree with this report being classified as release-critical - it is
*not* a regression over 3.0 (i.e. 3.0 already behaved in the same way).
That it is a regression relative to 2.x should not make it
release-critical - we can still fix such regressions in 3.2.
In addition, there is an easy work-around for applications that run into
the problem - just use utf-8 as the output encoding always:
py> e = ET.XML(b"<?xml version='1.0' encoding='iso-8859-1'?><body>t\xe3t</body>")
py> ET.tostring(e, encoding='utf-8')
b'<body>t\xc3\xa3t</body>'
+1 for Py3.1.1
Either way, it would be nice to get feedback so we can iterate on the
patch or close out this issue already :-)
The patch looks ok to me.
Committed in r78123 (py3k) and r78124 (3.1). I've also removed _escape_cdata() since it wasn't used anymore. Thanks Jerry for the patch.
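For reference, the committed fix relies on Python's built-in "xmlcharrefreplace" error handler. A standalone sketch of the serialization step (the function name here is illustrative, not the module's):

```python
# escape markup characters first, then let the codec emit charrefs for
# anything the target encoding cannot represent
def escape_cdata(text, encoding):
    text = text.replace("&", "&amp;")
    text = text.replace("<", "&lt;")
    text = text.replace(">", "&gt;")
    return text.encode(encoding, "xmlcharrefreplace")

print(escape_cdata("t\xe3t & <b>", "ascii"))  # b't&#227;t &amp; &lt;b&gt;'
```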
https://bugs.python.org/issue6233
Helper class for buffering a std::istream.
#include <buffered_istream.hpp>
Helper class for buffering a std::istream.
This class is used to buffer a std::istream which is used for small reads; a character at a time. The std::istream needs to create a sentinel object for every read, and profiling showed the std::istream class was causing a lot of overhead when parsing WML. This class helps by reading chunks from the std::istream and storing them in an internal buffer. Then the next request can deliver data from this buffer.
Since the class is only designed for small reads, it only offers get() and peek() to read data, and eof() to signal the end of data. The original stream should not be read from while it is owned by this class.
Definition at line 42 of file buffered_istream.hpp.
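To make the design concrete, here is a minimal re-creation of the described buffering scheme (illustrative only; the real implementation lives in buffered_istream.hpp and differs in detail):

```cpp
#include <cstdio>    // EOF
#include <cstddef>
#include <istream>
#include <sstream>

// Minimal sketch of the buffering idea: read chunks into an internal
// buffer and serve get()/peek()/eof() from it, avoiding the cost of a
// sentry object per character.
class tiny_buffered_istream
{
public:
    explicit tiny_buffered_istream(std::istream& in)
        : stream_(in), buffer_size_(0), buffer_offset_(0), eof_(false)
    { }

    bool eof() { fill_buffer(); return eof_; }

    // Gets a character without consuming it; EOF at end of input.
    int peek()
    {
        fill_buffer();
        return eof_ ? EOF : static_cast<unsigned char>(buffer_[buffer_offset_]);
    }

    // Gets and consumes a character.
    int get()
    {
        const int c = peek();
        if (c != EOF)
            ++buffer_offset_;
        return c;
    }

private:
    void fill_buffer()
    {
        if (buffer_offset_ < buffer_size_ || eof_)
            return;
        stream_.read(buffer_, sizeof(buffer_));
        buffer_size_ = static_cast<std::size_t>(stream_.gcount());
        buffer_offset_ = 0;
        if (buffer_size_ == 0)
            eof_ = true;  // the stream delivered no more data
    }

    std::istream& stream_;        // the input to read from
    char buffer_[1024];           // chunk buffer; real size chosen experimentally
    std::size_t buffer_size_;     // exact amount of data in buffer_
    std::size_t buffer_offset_;   // current character position
    bool eof_;
};
```

Usage follows the documented pattern: construct it over a stream, then call peek()/get() in a loop until eof().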
Definition at line 46 of file buffered_istream.hpp.
Is the end of input reached?
Definition at line 101 of file buffered_istream.hpp.
Referenced by preprocessor_data::get_chunk(), preprocessor_data::read_line(), preprocessor_data::read_rest_of_line(), preprocessor_data::skip_eol(), and preprocessor_data::skip_spaces().
Gets and consumes a character from the buffer.
Definition at line 61 of file buffered_istream.hpp.
References buffer_, buffer_offset_, c, eof_, fill_buffer(), and UNLIKELY.
Referenced by preprocessor_data::get_chunk(), tokenizer::next_char_fast(), preprocessor_data::read_line(), preprocessor_data::read_rest_of_line(), preprocessor_data::read_word(), preprocessor_data::skip_eol(), and preprocessor_data::skip_spaces().
Gets a character from the buffer.
This version only gets a character, but doesn't consume it.
Definition at line 88 of file buffered_istream.hpp.
References buffer_, buffer_offset_, eof_, fill_buffer(), and UNLIKELY.
Referenced by preprocessor_data::get_chunk(), tokenizer::peek_char(), preprocessor_data::read_rest_of_line(), preprocessor_data::read_word(), and preprocessor_data::skip_spaces().
Returns the owned stream.
Definition at line 107 of file buffered_istream.hpp.
Referenced by tokenizer::tokenizer(), and tokenizer::~tokenizer().
Buffer to store the data read from std::istream.

Reading from std::istream isn't too fast, especially not a byte at a time. This buffer is used to buffer x bytes at a time. The size of the buffer is determined experimentally.
Definition at line 124 of file buffered_istream.hpp.
Referenced by get(), and peek().
The offset of the current character in the buffer.
buffer_[buffer_offset_] is the current character, and can be peeked or consumed.
Definition at line 145 of file buffered_istream.hpp.
Referenced by get(), and peek().
When buffering the data there might be less data in the stream than in the buffer. This variable contains the exact size of the buffer. For example, the last chunk read from the stream is unlikely to have the same size as buffer_.
Definition at line 134 of file buffered_istream.hpp.
Is the end of input reached?
Definition at line 148 of file buffered_istream.hpp.
Referenced by eof(), get(), and peek().
The input to read from.
Definition at line 115 of file buffered_istream.hpp.
http://devdocs.wesnoth.org/classbuffered__istream.html
10 Packages for Your Next Django Project
Last week I attended DjangoCon Europe 2019 in the beautiful city of Copenhagen. The organizers did a great job putting together a fantastic event with three days of insightful talks covering a wide variety of topics followed by two days of sprints. I had a great time catching up with old friends and meeting new ones.
Also, I had the opportunity to give a breakout session titled “Behind the Curtain – How Django handles a Request” where I took a room full of Djangonauts on a deep dive through the bits of Django that are touched by every HTTP request, but rarely by any human.
If you missed out on all that fun, don’t despair, recordings of the talks are online.
Now, it’s easy to return from a conference with a warm fuzzy feeling, but without any actionable takeaways. To counter that, I compiled a smorgasbord of some great Python packages, modules, and classes that I learned about at DjangoCon Europe this year.
📦 IPython.display.JSON – Displays an interactive JSON widget in Jupyter Notebook.
Mentioned in Jupyter, Django, and Altair – Quick and dirty business analytics by Chris Adams
When I want to hash out a piece of code, I regularly use Jupyter Notebook for that. IPython.display.JSON displays a JSON object as an interactive widget, which comes in handy when working with large JSON objects. It takes a dict or list, but it can also read directly from a file or URL. The IPython.display module contains more goodies of that kind. You can easily use Jupyter Notebook with Django with the shell_plus command from django-extensions.
📦 requests-respectful – A wrapper around requests to work within rate limits.
Mentioned in Fetching data from APIs (GitHub) using Django and GraphQl without hitting the rate limits by Manaswini Das
When working with external APIs, you usually have to honor some kind of rate limit. That sounds simple but becomes quite complicated once multiple threads, processes or machines are involved. requests-respectful solves this by using Redis to keep track of your API usage.
from requests_respectful import RespectfulRequester

rr = RespectfulRequester()

# This can be done elsewhere but the realm needs to be registered!
rr.register_realm("Github", max_requests=100, timespan=60)

response = rr.get("", params={"foo": "bar"}, realms=["Github"], wait=True)
📦 structlog – A library to make logging both less painful and more powerful.
Mentioned in Logging Rethought 2: The Actions of Frank Taylor Jr. by Markus Holtermann
Python's logging module can be challenging to wrap your head around. structlog aims to simplify it. Instead of just logging a message, it allows you to easily add key/value pairs that carry more information about an event.
>>> import structlog
>>> log = structlog.get_logger()
>>> log.msg("greeted", whom="world", more_than_a_string=[1, 2, 3])
2016-09-17 10:13.45 greeted more_than_a_string=[1, 2, 3] whom='world'
📦 django-stubs – Type stubs and a mypy plugin for Django.
Mentioned in Lightning Talk Type-checking your Django by Seth Yastrov
I think optional static typing is a great compromise between the rigidness of a statically typed language and the potential maintenance issues of dynamic typing.
mypy is a powerful tool in the Python developer's toolbox, but the highly dynamic nature of Django limits its usefulness. django-stubs improves the type checks mypy can do. Also check out djangorestframework-stubs.
📦 OSMGeoAdmin – A GeoDjango admin class for geometries that uses Open Street Map.
Mentioned in Maps with GeoDjango, PostGIS and Leaflet by Paolo Melchiorre
GeoDjango has special admin classes to view and edit geometries. This particular one uses Open Street Map for visualization. This one might not be big news for anyone familiar with GeoDjango, but I just recently started using it and somehow completely missed this part of the docs.
📦 django-money – Money fields for django forms and models.
Mentioned in Building a custom model field from the ground up by Dmitry Dygalo
I got 99 problems but money ain't one. At least storing monetary values in a Django application isn't a problem anymore, thanks to the MoneyField from django-money. It not only stores the amount correctly but also the currency, and knows how to display it correctly. django-money packs a ton of other features, like validation, currency conversion, and Django REST framework support.
📦 bellybutton – A customizable, easy-to-configure linting engine for Python.
Mentioned in Maintaining a Django codebase after 10k commits by Joachim Jablon and Stéphane “Twidi” Angel
If you are looking for a way to lint your code that goes beyond pylint and flake8, bellybutton might be for you. It allows you to write custom rules using astpath, an XPath-like syntax for querying Python ASTs.
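The underlying idea, walking a Python AST to flag patterns, can be illustrated with the standard library ast module. This is only an illustration of the concept; bellybutton itself expresses rules as XPath-like queries, not hand-written walkers, and find_eval_calls is a name made up for this sketch:

```python
import ast

def find_eval_calls(source):
    """Toy linting rule: return the line numbers of all calls to eval()."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        # A bare call like eval(...) is a Call whose func is a Name node.
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id == "eval"
    ]

print(find_eval_calls("x = eval('1 + 1')\ny = 2\n"))  # [1]
```

A rule engine like bellybutton lets you declare checks of this kind in configuration instead of writing the tree walk yourself.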
📦 django-classy-settings – A Class-based way to manage Django settings.
Mentioned in Lightning Talk Another View on Handling Settings by Curtis Maloney
For any non-trivial Django project, you almost always end up with slightly different settings for different environments. There are various solutions for this, for example, having multiple settings files starting with a from base_settings import * statement. django-classy-settings solves this problem using a class-based approach.
📦 confucius – An easy way to provide environ backed config in your projects.
Mentioned in Lightning Talk Another View on Handling Settings by Curtis Maloney
This package is a fresh take on environment-specific settings. It also uses classes but comes with the additional twist that it can read settings from environment variables and uses type annotations to automatically convert them to the right type. With a __getattr__ at the module level, a new feature in Python 3.7 (see PEP 562), you can write a pretty neat settings file:
from confucius import BaseConfig

class Config(BaseConfig):
    DB_HOST = 'localhost'
    DB_PORT = 5432

__getattr__ = Config.module_getattr_factory()
📦 django-context-decorator – A decorator to build the context of a class-based view.
Mentioned on Twitter by Tobias Kunze
Instead of overriding get_context_data in your class-based view, just add this decorator to any attributes or methods of your class and they will be added automagically. This one wasn't mentioned at the conference, but it was created during the sprints (and I read about it on Twitter while I procrastinated on this article), so I think it belongs here.
from django_context_decorator import context
from django.utils.functional import cached_property
from django.views.generic import TemplateView

class MyView(TemplateView):
    template_name = 'path/to/template.html'

    @context
    def context_variable(self):
        return 'context value'

    @context
    @property
    def context_property(self):
        return 'context property'

    @context
    @cached_property
    def expensive_context_property(self):
        return 'expensive context property'
There you have it, 10 packages that will save you hours of development time. Obviously, this is only a small sample of all the packages and tools that were mentioned at DjangoCon Europe 2019. If you were there, your list might look very different. If you need a good argument to convince your boss (or yourself) why attending a conference like DjangoCon is worth it, just point them to this page.
Header photo by Nick Karvounis, Code examples from the documentation of the respective packages.
Setup Single-node Kubernetes Cluster on a Home Lab server using k0s
You can setup a k8s cluster on your home lab server with k0s. Fortunately, I have a miniPC box with an Atom 1.44GHz CPU and 4GB of RAM that I don't use much. So here comes this blog.
TL;DR
This blog guides you through setting up a single-node Kubernetes cluster with k0s on a miniPC running Ubuntu 20.04, installing ingress-nginx and cert-manager, and deploying a sample whoami application that can be publicly accessed over the Internet.
Prerequisites
- Broadband internet with a good download/upload bandwidth — in my case it is 500/500Mbps
- A home lab PC with at least 2 CPU cores and 2GB of RAM, connected to your home broadband network
- A public IP from your internet service provider — this may incur additional cost, check with your ISP
Install Ubuntu Server 20.04 on Home Lab server
Download Ubuntu Server 20.04 ISO file from its official site.
Make a bootable USB stick from the downloaded ISO file by following these official instructions.
Once you get the bootable USB stick, restart your PC and boot with your USB. Just follow on-screen instruction to install Ubuntu server.
Configure Ubuntu
Update System
Once the installation is complete, log on and update the system.
apt-get update && apt-get -y upgrade
Configure IP Address
Configure the DHCP server in your router to issue a fixed IP address to your server. Try to assign an IP address that is outside of your DHCP range to avoid collisions. For example, your router may have a DHCP range from 192.168.1.100 to 192.168.1.255. Then you may assign 192.168.1.10 to your server.
Configuration steps vary by router. If not possible, you may need to configure the IP manually on Ubuntu. The instructions to do so are beyond the scope of this blog.
After DHCP is configured, restart the server and check that the IP address is configured properly.
# Reboot
sudo reboot

# Show local IP address
hostname -I | awk '{print $1}'
Setup SSH
Add your SSH key to the server by executing this command on another computer/laptop.
ssh-copy-id yourusername@ip_address
Edit the SSH config file
sudo nano /etc/ssh/sshd_config
Change the following lines for better security
PermitRootLogin no
PasswordAuthentication no
Restart SSH
systemctl restart sshd
Test SSH from another computer/laptop and make sure you can connect and logon.
ssh yourusername@ip_address
Configure SSH for Remote Access
If you want to SSH to your server from the internet, you need to configure port forwarding on your router. Choose an external port other than 22 for better security. This varies by router, so the instructions are beyond the scope of this blog.
Once configured, you may test the connection.
ssh your_username@external_ip -p external_port
You can see your external IP address by executing this command on your server.
curl ipv4.icanhazip.com
If your public IP is static then this is fine. You can use the IP address to connect as it is fixed and will never change.
But most ISPs won't give you a static one but a dynamic one, i.e. your external IP keeps changing. In this case, you may need to configure Dynamic DNS (DDNS) settings on your router so you can access your server using a domain name and don't need to worry about the IP.
Again, the steps to configure DDNS vary by router and will not be included in this blog.
To save some keystrokes on the SSH command, you can create the file ~/.ssh/config with the following content:
Host youralias
User your_username
Port external_port
Hostname your_domain_name
Then you can easily make an SSH connection in short like this
ssh youralias
Unattended Upgrade
You can also configure the system to automatically upgrade and restart by following the steps I wrote in this blog. Look for the Setup Unattended Upgrades title.
Install k0s
Follow its official instructions by executing this command.
curl -sSLf | sudo sh
Check if it is installed properly
$ k0s version
v0.11.0
Setup the Cluster
Building Config
Generate the default config YAML file to a location accessible by root.
sudo k0s default-config > /root/k0s.yaml
Edit the file and append the following extension. Make sure you update two things:
- The load balancer IP address range: use a small range of available and addressable IP addresses on your local network, outside the DHCP scope, to avoid collisions. In this example, I use 192.168.1.20–192.168.1.25.
- The email address in the cluster-issuer section.
This will install ingress-nginx and cert-manager on your cluster in one shot.
Start the Cluster
Install the cluster from our configuration file and start the server.
sudo k0s install controller -c /root/k0s.yaml --enable-worker
sudo systemctl start k0scontroller.service
Wait a bit and check server status
$ sudo k0s status
Version: v0.11.0
Process ID: 1472
Parent Process ID: 1
Role: controller+worker
Init System: linux-systemd
Check server node
$sudo k0s kubectl get nodes
NAME STATUS ROLES AGE VERSION
urserver1 Ready <none> 6d v1.20.4-k0s1
NOTE: After a few minutes, if you still don't see the node, try restarting the service.
sudo systemctl restart k0scontroller.service
Reset the Cluster
In case your cluster goes wrong and you want to set it up again, use this command to reset the cluster, then restart the server before repeating the steps.
sudo k0s reset
sudo reboot
Remote Access to the Cluster
To access the cluster with kubectl, you need to copy the kubeconfig file.
sudo cp /var/lib/k0s/pki/admin.conf ~/admin.conf
And transfer it to your other computer.
scp your_username@your_server:~/admin.conf ~
Edit the file and replace the server field with your server's IP address and port. If you decide to open this port (6443) to the internet, this could be your server's external IP or domain name and external port.
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: ***
server:
name: local
...
Test connection
$ export KUBECONFIG=~/admin.conf
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:28:09Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.4-k0s1", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-03-03T07:31:16Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
If you see the server's version information then the connection is successful.
Verify Cluster
Check if all pods are up and running. You may need to wait for a while until all pods are up.
kubectl get pods --all-namespaces
Check if all Helm charts are installed as expected
helm list --all-namespaces
Check the ingress external IP. It should be the load balancer IP address on your local network that you defined in the k0s.yaml file when setting up the server.
kubectl get services --namespace kube-system
Forward Ports
To make your web application accessible from the internet, you need to forward ports 80 and 443. The configuration steps should be the same as you did for SSH (port 22) and kubectl (port 6443).
Eventually, you may have these port forwarding entries on your router (assuming 192.168.1.10 is the server's IP and 192.168.1.20 is the LB's IP).
Ext.Port Int.Port Server IP
-------- -------- ---------
12322 22 192.168.1.10
12343 6443 192.168.1.10
80 80 192.168.1.20
443 443 192.168.1.20
Test Deploying whoami
Prepare Configuration File
Create a new namespace
kubectl create namespace whoami
Create a YAML file whoami.yml with the following content
Deploy Application and Test
Apply the YAML file to the namespace
$ kubectl apply -f whoami.yml --namespace whoami
deployment.apps/whoami-deployment created
service/whoami-service created
ingress.networking.k8s.io/whoami-ingress created
Check the status
kubectl get all --namespace whoami
Try accessing your application at
Configure HTTPS
To have a proper SSL certificate for an HTTPS connection, you need a valid domain name. Then create a DNS A record pointing to your public IP address. The instructions vary by DNS service provider and won't be included here.
Check if DNS record can be resolved properly
$ nslookup whoami.yourdomain.com 8.8.8.8
Server: 8.8.8.8
Address: 8.8.8.8#53

Non-authoritative answer:
Name: whoami.yourdomain.com
Address: xxx.xxx.xxx.xxx
Create a new ingress file ingress.yml with the following content:
Reapply the ingress
$ kubectl apply -f ingress.yml --namespace whoami
ingress.networking.k8s.io/whoami-ingress configured
Watch the pods. You will see a cert-manager pod created to auto-provision the SSL certificate and then get terminated. This process usually takes less than one minute.
kubectl get pod --watch --namespace whoami
Check the certificate status and you should see True in the Ready column.
$ kubectl get certificates --namespace whoami
NAME READY SECRET AGE
whoami True whoami 2m56s
Now, test accessing your application at and it should redirect to HTTPS with a valid SSL certificate.
Clean up
Once you have finished testing, you can delete whoami.
kubectl delete all --all --namespace whoami
kubectl delete namespace/whoami
Caveats
k0s worker node is sometimes not up and running
When you start the cluster for the first time, sometimes the worker node is not up even after some amount of time. In this case, restarting the service usually helps.
sudo systemctl restart k0scontroller.service
Cannot use CNAME record with cert-manager
In my environment, my public IP is dynamic and mapped to a DNS name using a Dynamic DNS (DDNS) service. I have my own custom domain that I tried to map to the DDNS domain using a CNAME record. But that doesn't seem to work with cert-manager.
I ended up mapping my custom domain using an A record, and I am thinking about setting up an automated task to regularly check and update the DNS A record.
But please let me know if you find a better way to deal with this.
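The periodic A-record update mentioned above could be sketched as a small script. Here sync_dns and the update_record callback are hypothetical names; the real update call depends entirely on your DNS provider's API:

```python
def sync_dns(current_ip, recorded_ip, update_record):
    """Call update_record only when the public IP has changed.

    current_ip:    IP fetched from e.g. `curl ipv4.icanhazip.com`
    recorded_ip:   IP the A record currently points to (via a DNS lookup)
    update_record: provider-specific callback (hypothetical)
    """
    if current_ip != recorded_ip:
        update_record(current_ip)
        return True
    return False

# Example run with a stand-in callback that just collects the new IP:
updates = []
changed = sync_dns("203.0.113.7", "203.0.113.1", updates.append)
print(changed, updates)  # True ['203.0.113.7']
```

Run from cron every few minutes, a check like this keeps the A record close enough to the dynamic IP for cert-manager's HTTP-01 validation to keep working.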
Hi,
I have a doubt regarding inheritance of base class constructor by inherited class.
I was recently going through some C++ online tutorial when I came across this piece of code:
#include <iostream>

class Foo
{
public:
    Foo() { std::cout << "Foo's constructor" << std::endl; }
};

class Bar : public Foo
{
public:
    Bar() { std::cout << "Bar's constructor" << std::endl; }
};

int main()
{
    // a lovely elephant ;)
    Bar bar;
}
I will quote the exact text present in the site from where I took the above code :
The object bar is constructed in two stages: first, the Foo constructor is invoked and then the Bar constructor is invoked. The output of the above program will be to indicate that Foo's constructor is called first, followed by Bar's constructor.
Now, my question is:
Will the constructor Foo() be called when an object of class Bar is declared?
This is the site from where I got the above code:
Initialization Lists in C++
Now according to the site, the constructor Foo() will be called. But as per my knowledge, this is not true since base class constructors are not inherited. But I want to confirm this.
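For reference, the implicit base-class call can be made explicit with an initializer list. This sketch spells out what the compiler does on its own when the list is omitted:

```cpp
#include <cassert>
#include <iostream>
#include <sstream>

class Foo {
public:
    Foo() { std::cout << "Foo's constructor" << std::endl; }
};

class Bar : public Foo {
public:
    // The initializer list makes the implicit base-class call visible;
    // "Bar() : Foo()" is exactly what the compiler generates when it is omitted.
    Bar() : Foo() { std::cout << "Bar's constructor" << std::endl; }
};
```

Constructing a Bar prints Foo's constructor first and then Bar's constructor, so the base constructor does run, even though constructors themselves are not inherited as members.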
Regards,
The Raspberry Pi has been a huge phenomenon in the maker scene, spawning a wide range of accessories, add-ons and even specialised 'Hats'. One of the most underappreciated peripherals would have to be the small Raspberry Pi camera board. At first look it might seem to be the same as any cheap USB camera, but its functionality is far beyond that. The special 15 pin ribbon cable the camera board uses enables it to interface directly with the graphics processor on a Raspberry Pi. This allows it to use the full power of the Pi for all the image processing and heavy lifting, rather than relying on processors in the camera itself. This, combined with its reasonably high fidelity photo sensor, lets the camera produce great quality images and videos at a reasonably low cost. The other feature of the camera board is its ability to be activated and controlled from various coding languages. There is an extensive library of code for the Python programming language and it can also be used right from the terminal in a compatible Linux distribution.
For a project using the camera board, I mounted it along with a Raspberry Pi and a small LCD screen into a car to be used as a DVR system. Car DVRs are very popular tools both for private and commercial vehicles - they are used to provide evidence in resolving traffic disputes and aid in insurance claims. If your car is your pride and joy, having video evidence of a crash can be the difference between getting an insurance payout and being stuck with an expensive wreck.
For this project the Raspberry Pi can be running a number of operating systems. The requirement is that it has to run Python and be compatible with the camera board and GPIO modules. For a quick and easy setup I used a NOOBs pre-loaded microSD card. It comes pre-loaded so there is no need to format or write an SD card; it's ready to go right out of the box. By holding Shift on a keyboard right after boot you'll get to the recovery menu that will let you choose a different operating system. For this project I stuck with the pre-loaded Raspbian.
Depending on the electrical system in your car, powering a Raspberry Pi from it can cause difficulties. When starting the engine or switching on electric devices, such as headlights, the power from the battery or alternator can momentarily drop. Modern car stereos are designed to tolerate this, but the humble Raspberry Pi is not - any significant drop will cause it to fully lose power then reboot. This can be annoying at best, forcing a system reboot every time you want to start the engine or switch the headlights to full beam. A solution to this is using a UPS - Uninterruptible Power Supply.
A UPS works by having a battery that can power devices when incoming current temporarily drops out. My low cost UPS solution was using a super cheap, unbranded, USB Power Bank - small external batteries that are commonly used to charge cell phones on the go. Unfortunately, most of these types of devices aren't the best solution. A normal UPS works by having an ultra fast, automatic switch that can toggle from the incoming power to a battery bank when the current drops below a certain level - switching fast enough that devices attached to the UPS do not notice any difference. Most cheap power banks, however, function by having the input power charge a battery and have the output power coming from the battery simultaneously. Inefficiencies in the battery and the circuitry used to charge it means that the power coming out is significantly less than the power coming in. Because of this some of these devices are unusable as a UPS while others may function but can have issues, such as the battery running flat even when it is being recharged. There are many such devices and each one is different. The brandless one I have functions adequately when powering a Raspberry Pi, but others may not.
The best way to have simple control of the camera's recording functionality is to connect a toggle switch to the GPIO on the Raspberry Pi. By using a GPIO switch to control the camera, you leave any keyboard, mouse or even touchscreen input unused and free for any other use by your chosen operating system. You can use a flick switch or a rocker, but to keep the footprint on the dashboard small I used a latching pushbutton. You may have seen other guides use pull-up resistors for buttons on the Raspberry Pi, but we can use a line of code to pull up for us. Simply wire one of the button contacts to an available GPIO pin - in my case I chose #24, but any will do - and the other contact to one of the Pi's ground pins. The easiest way to do this is using female jumper wires, just cut the end off two wires and solder them to each of the button contacts.
Latching Pushbuttons and Rocker Switches connected to jumper wires
Something that you don't realise before seeing it in the flesh is just how small and light the camera board is. My original plan was to use a small piece of acrylic attached to suction cups to make a mount that sticks on the windshield. Unfortunately getting it to hinge on the right angle for a good view while still being solid enough to not wobble while recording was tough. The solution I found was to use a GPS suction mount that has a good hinge, using light adhesive and small screws to stick the camera board to its backside. It was solid enough to not shake while recording but could still be oriented for the best viewing angle out the windshield.
Camera board mounted on the windscreen
When mounting the camera board, be careful to check what your local laws and regulations are. Certain states in the US, including California, have laws against attaching any device to a vehicles windshield even if it is just temporarily adhered with suction cups. Fortunately, here in cold New Zealand no such laws exist.
I used an unbranded 7" LCD screen and HDMI adapter board as the monitor for the Raspberry Pi. These are very cheap, but also inconsistent in quality. Some are able to run on the 5 volts from a USB port, while others need a higher voltage - even between identical looking models. They are also prone to having flaws on the LCD, like dead or stuck pixels, and often have poor viewing angles. Hopefully when the long rumoured official Raspberry Pi touch screen is released these problems will be a thing of the past, but for now it is a case of buyer beware.
The full completed circuit
If your car's dashboard already has a screen built in it might be usable for this project. The Raspberry Pi can output a composite A/V signal, just like what every DVD player and game console used long before HDMI was common. If you have an "Aux Video" or "AV input" option you'll just need a Raspberry Pi 3.5mm to 3 RCA cable to wire it in. Keep in mind that the video quality of composite video is significantly inferior to HDMI, but there will be no change in quality of any video recorded by the camera board. It'll be hard to read text, so the Raspberry Pi will most likely have to be set up on a HDMI monitor first.
Unfortunately the dash in my 1983 Toyota didn't have a great spot to put a screen. To hold it in place I used rubber washers attached to small bolts along with double sided mounting tape attached to a piece of thick, solid card. I put the latching buttons in the card, using a spade bit on a drill to get the right sized holes. Strong cable ties attached the card to my car's centre console. I left the USB power adapter exposed and ran the cables back just to allow all the wiring to be easily removed if necessary.
I attached the Raspberry Pi to a sheet of card and placed it securely in the glove compartment. In order for the camera to mount on the windshield I had to use a longer ribbon cable, one meter as opposed to the standard 15 centimetres. You have to be careful when doing this. Because the cable is unshielded, having a longer run can cause problems with the video signal and even make it not work at all depending on the level of interference.
A USB car charger is an easy way to get the voltage to the right level for a Raspberry Pi. It is also useful as most units you buy have a fuse built in, giving extra protection if your car's electrics may not be that reliable. Be sure to use a good quality charger that gives out a constant, smooth voltage. My Pro-Power AC adapter has two USB ports; I used one for the Raspberry Pi and the other for the LCD screen.
To get the button to activate the camera recording I used this Python script.
import datetime
import picamera
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
GPIO.setup(24, GPIO.IN, pull_up_down = GPIO.PUD_UP)
while True:
    GPIO.wait_for_edge(24, GPIO.FALLING)
    dvrname = datetime.datetime.now().strftime("%y%m%d_%H%M%S")
    with picamera.PiCamera() as camera:
        camera.resolution = (1920, 1080)
        camera.start_preview()
        camera.start_recording('/home/pi/' + dvrname + '.h264')
        GPIO.wait_for_edge(24, GPIO.RISING)
        camera.stop_recording()
GPIO.cleanup()
From the top down:
- Lines 1, 2 and 3 import modules for reading the date and time, controlling the camera board, and accessing the GPIO interface respectively.
- Line 4 selects what numbering system is used to identify GPIO pins - I used the BCM numbering, the alternative is to use Board numbering. Be sure to double check what system you used when attaching the buttons to the Raspberry Pi.
- Line 5 sets the pin used to be 'up' or 'down'. This line allows buttons to be used with the GPIO without having to wire resistors. Setting it to up means one side of the button should be attached to a ground pin to bring it down.
- Line 6 sets up a while loop that will check if a certain condition is met before proceeding.
- Line 7 detects when our chosen GPIO pin is Falling - going from being up to down, or more simply when the attached button is pressed.
- Line 8 builds a filename for our recorded video, based on the current date and time for unique file names for every video and for easy sorting of videos.
- Line 9 is when we start using the camera.
- Line 10 sets the resolution to 1920x1080, also called Full HD.
- Line 11 starts the preview, letting the video from the camera be displayed live on screen. Remove this line if you don't want to see what is being recorded.
- Line 12 starts the recording and outputs it to the filename that was made in line 8.
- Line 13 detects when the GPIO pin is pulled up, or when the button is switched off.
- Line 14 stops the recording.
- Finally, line 15 cleans up our used GPIO pins to prevent clashes or things getting out of hand.
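The timestamp format from line 8 produces compact, sortable names; it can be checked in isolation with a fixed datetime. For a recording started on 23 June 2015 at 14:05:09 the script would build:

```python
import datetime

# Same format string as the script: two-digit year, month and day,
# then hour, minute and second.
stamp = datetime.datetime(2015, 6, 23, 14, 5, 9).strftime("%y%m%d_%H%M%S")
print(stamp + ".h264")  # 150623_140509.h264
```

Because the date fields come first, sorting the files by name also sorts them chronologically.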
I then saved this python script as 'camera.py' in the /home/pi/ directory.
In order to have this script running in the background while Raspbian is running, the file at /etc/rc.local needs to be edited. Open it up and add the following line to the very bottom.
python /home/pi/camera.py &
The ampersand at the end is important, it ensures the script is always running in the background and will not close.
This script is fairly reusable, you could substitute the camera function for other code to get a Raspberry Pi to do all kinds of things on the flick of a switch. You can also do more things with the camera, like adjusting for low light or recording in slow motion. Look through the picamera documentation for all the extra details.
Using a Raspberry Pi as the core of a car DVR system has a big advantage. Rather than having to take the SD card out of the car to review the video footage, adding a WiPi wireless adapter lets you copy the videos from the Pi wirelessly. In Raspbian, set the directory where the camera's video files are saved as a network share. Then setup the WiPi to connect to your home WiFi network. Now as long as your car is in range of your wireless access point you can copy the video files remotely to your computer or tablet.
A few final things:
- It's a good idea to wire in switches to the power going into everything, just to be able to keep everything turned off. It will also let you turn the Raspberry Pi on again after shutting it down without having to unplug it.
- If wiring the system using the feed directly from your car battery, you run the risk of getting a flat battery if you forget to shut everything down. I used the ignition power feed to ensure that the key has to be in the car for it to get power. You can also set a shutdown timer in Raspbian that will turn everything off after being idle for a set period.
- You can run out of SD card storage space very fast recording full HD video. Try setting a lower frame rate to save space over the default 30 frames per second, or connect a USB flash drive and use that to record to.
- After the installation you have a full Raspberry Pi installed in your car! If you disable the video preview you can do whatever you want while recording, the quad core processor in the Raspberry Pi 2 makes this multitasking work well. Try installing media players and connect the audio output to your stereo, or doing other fun things. Great for entertainment when waiting for people in your car! Just please don't try and watch a movie while driving. If that's not illegal where you are, it probably should be.
If you have any questions or suggestions, leave them on the comments below or you can contact me on Twitter - @aaronights.
Source: https://www.element14.com/community/community/raspberry-pi/raspberrypi_projects/blog/2015/06/23/the-secrets-of-the-pi-camera--car-dvr-system
Red Hat Bugzilla – Bug 1297788
repo-rss has missing 'Requires: libxml2-python'
Last modified: 2016-11-03 20:14:31 EDT
Also happens for
yum-utils-1.1.31-34.el7.noarch
easy reproducer (when yum-utils is already installed):
1. # rpm -e --nodeps libxml2-python
2. # yum -y reinstall yum-utils
3. # repo-rss -h
Traceback (most recent call last):
File "/usr/bin/repo-rss", line 25, in <module>
import libxml2
ImportError: No module named libxml2
+++ This bug was initially created as a clone of Bug #1155093 +++
Description of problem:
yum-utils needs to have a Requires: libxml2-python added as repo-rss requires libxml2-python.
Version-Release number of selected component (if applicable):
yum-utils-1.1.30-30.el6
How reproducible:
Always
Steps to Reproduce:
1. Do a minimal RHEL-6.6 install then install yum-ut.
Source: https://bugzilla.redhat.com/show_bug.cgi?id=1297788
Unit Testing, Agile Development, Architecture, Team System & .NET - By Roy Osherove
Update: I've uploaded a more complete version of the framework and also changed the file name. Please get it from here: if you have the earlier version. (And just so you know - it works perfectly with the TestDriven.Net tool suite.)
One of the things that has bothered me the most since I got into the whole "Add database rollback to your unit tests" thing is how much work it takes to make your test suite use this feature. I actually went and made a new binary of the NUnit Framework to support this new attribute - all this because there is no clear extensibility model for NUnit these days. Peli, on the other hand, has a very nice way of extending MbUnit, but it still entails recompiling his library for this to work (or am I wrong?)
So an idea came to me. A while ago, Peli told me he had just found out about ContextBoundObjects and the ability to "intercept" method calls for pre- and post-processing. He said it might have some cool uses for unit testing, but we couldn't find something that was really cool to do with it.
The other day, while reading this nice article about implementing interception in your code, I got an idea: maybe interception and Contexts are the best way for extensibility? So, I gave it a shot. And it turns out pretty darn cool, I have to say.
Introducing the XtUnit.Framework project
With this project you are now able to add any attribute you can think of to your NUnit (or any other xUnit) tests with the ease of simply deriving from a base class. In the solution that you can download, you will find 3 projects:
Now here are the cool things:
I'd love to get your feedback. Have fun :)
I'd like to thank Jamie for helping me solve two simple and annoying bugs I just couldn't find with my thick head.
HI Roy,
I'm getting this on my testing solution when I try to use the SampleTestFixture.cs copied into my solution. This is using TestDriven.net 2.0.1948 Personal :
TestCase 'M:dbTesting.SampleTestFixture.MyDataRelatedTest'
failed: Couldn't find declaring type with name 'dbTesting.SampleTestFixture'
Do you have an idea what's causing this ?
Thanks for your help!
Hi, I'm using DataRollBack attribute and it works fine with Sql Server...compliments...but now I want to use it even with an application that uses an Oracle 9i db accessed by ODP.NET . Reading your article, I've thought that it must work anyway, because ODP.NET is based upon ADO.NET . Enterprise Services manages transactions at this level...but it doesn't work, or rather, the transaction doesn't roll back...do you have any idea?
Hi, I've forgot to say that the Oracle server is on a remote machine. This is probably the cause of the problem. Of course XtUnit works on the local DTC...it is possible to connect on the ServiceDomain object of a remote machine? And how to do that?
Thanks for your help
Cosimo
I modified the code to use System.Transactions.TransactionScope...thus removing the need for the MSDTC and Enterprise Services.
public class RollBackAttribute : TestProcessingAttributeBase
{
    TransactionScope transactionScope;

    [DebuggerStepThrough]
    protected override void OnPreProcess()
    {
        try
        {
            transactionScope = new TransactionScope(TransactionScopeOption.RequiresNew,
                                                    new TimeSpan(0, 0, 0, 10000, 0));
        }
        catch (Exception e)
        {
            OutputDebugMessage("Could not enter into a new transaction:\n" + e.ToString());
        }
    }

    [DebuggerStepThrough]
    protected override void OnPostProcess()
    {
        try
        {
            Transaction.Current.Rollback();
            transactionScope.Dispose();
        }
        catch (Exception e)
        {
            OutputDebugMessage("Could not leave an existing transaction:\n" + e.ToString());
        }
    }
}
Source: http://weblogs.asp.net/rosherove/archive/2004/10/05/238201.aspx
Categories: Python | S60 | Code Examples | How To
This page was last modified 05:33, 29 April 2008.
How to add a text to an image
From Forum Nokia Wiki
This article contains code snippets showing how text can be added to an image or a photo, using Python.
How to add text to an image:
from graphics import *

bg = Image.new((240, 240))
bg.text((30, 30), u"Text", font="title")
bg.save("C:\\a.jpg", quality=100)
How to add text to a photo (the procedure is the same as above):
from graphics import *

photo = Image.open("c:\\myphoto.jpg")
photo.text((45, 30), u"Text goes here", font="normal")
photo.save("c:\\mynewphoto.jpg")
See also: How to edit an image
Source: http://wiki.forum.nokia.com/index.php/How_to_add_a_text_to_an_image
gocept.exttest 1.0
Helper to integrate external tests with python unittests.
Runs tests provided by an external command from Python’s unittest.TestCase.
Usage
gocept.exttest provides one public function, makeSuite, which returns a unittest.TestSuite and takes a single argument: the name of the external binary to run. (Any additional arguments will be passed to the external command as command-line parameters.)
Here’s a simple example:
import gocept.exttest

def test_suite():
    return gocept.exttest.makeSuite(
        'bin/external_test_runner',
        '--some-arg',
        '--another-arg')
makeSuite calls the external command to ask for a list of test cases and test functions (see below for the exact protocol), and returns a TestSuite of TestCase objects that contain corresponding test methods. Each test method will call the external command to run its test, and converts the results returned by the external command to the conventions of the unittest module (e.g. raises AssertionError for failed tests, etc.).
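The conversion described above can be sketched in a few lines of Python. This is a simplified, hypothetical model — the real makeSuite shells out to the external binary, whereas here the listing and runner are passed in as plain values so the idea stands on its own:

```python
import json
import unittest

def make_suite(listing, run_one):
    """Hypothetical sketch of makeSuite's core idea.

    `listing` is the JSON text an external command would print for --list;
    `run_one(spec)` stands in for invoking `--run <spec>` and returns the
    parsed JSON result list.  The real library calls the binary instead.
    """
    suite = unittest.TestSuite()
    for case in json.loads(listing):
        attrs = {}
        for test_name in case["tests"]:
            # Bind the spec per-iteration via a default argument.
            def test_method(self, spec=case["case"] + "." + test_name):
                result = run_one(spec)[0]
                if result["status"] != "SUCCESS":
                    # Convert the external FAIL into a unittest failure.
                    raise AssertionError(result.get("message", spec))
            attrs[test_name] = test_method
        test_case_cls = type(str(case["case"]), (unittest.TestCase,), attrs)
        for test_name in case["tests"]:
            suite.addTest(test_case_cls(test_name))
    return suite

# Demo with a fake runner instead of a real external binary:
listing = '[{"case": "ExampleCase", "tests": ["test_ok", "test_bad"]}]'

def fake_run(spec):
    ok = spec.endswith("test_ok")
    return [{"name": spec, "status": "SUCCESS" if ok else "FAIL",
             "message": "expected failure"}]

result = unittest.TestResult()
make_suite(listing, fake_run).run(result)
print(result.testsRun, len(result.failures))   # 2 1
```

Building the TestCase subclass with type() is what lets one generic library surface arbitrarily named external tests to any unittest-compatible runner.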
Requirements
The external command needs to understand two command line parameters: --list and --run <test-specification>:
--list must return a list of available test cases and test functions formatted as JSON:
$ bin/external_test_runner --list [{"case": "MyExternalTestCase", "tests": ["test_one", "test_two"]}]
--run is used to run one specific test, returning the results formatted as JSON:
$ bin/external_test_runner --run MyExternalTestCase.test_two [{"name": "MyExternalTestCase.test_two", "status": "FAIL", "message": "Test failed.", "traceback": "..."}]
NOTE: The custom JSON format for test results was chosen for simplicity when integrating with JavaScript (see below); we’ll have to evaluate whether the commonly used XML format from JUnitReport could be used instead.
If neither --list nor --run is given, the external command should run all tests:
$ bin/external_test_runner [{"name": "MyExternalTestCase.test_one", "status": "SUCCESS", "message": "Test passed."}, {"name": "MyExternalTestCase.test_two", "status": "FAIL", "message": "Test failed.", "traceback": "..."}]
Example: JavaScript
Running tests
We built gocept.exttest to integrate JavaScript unit tests with Python unit tests. We've decided to use Jasmine as the JavaScript unit test framework, running under node.js via jasmine-node. (In order to use Jasmine with gocept.exttest, we extended jasmine-node to support the --list / --run arguments and the JSON output format.)
In your buildout environment, install node.js and jasmine-node like this:
[buildout] parts = nodejs test [nodejs] recipe = gp.recipe.node npms = ${buildout:directory}/../jasmine-node scripts = jasmine-node [test] recipe = zc.recipe.testrunner eggs = your.package environment = env [env] jasmine-bin = ${buildout:directory}/bin/jasmine-node
You need to check out the jasmine-node fork from until the changes are merged upstream. (In the example, ${buildout:directory}/../jasmine-node is used for its location.)
Writing tests
For example, let’s say the javascript tests should reside in your.package.tests. jasmine-node supports tests written in both JavaScript and CoffeeScript (by specifying the --coffee command-line parameter), and requires test files to have _spec in their name.
An example test might look like this (please refer to the Jasmine documentation for details):
require 'my_app.js'

describe 'MyApp', ->
  it 'has read Douglas Adams', ->
    expect(new MyApp().calculate_the_answer()).toEqual(42)
Then wire up the tests as follows (the path to the external command is passed to the tests via an environment variable):
import os

import gocept.exttest
import pkg_resources

def test_suite():
    return gocept.exttest.makeSuite(
        os.environ.get('jasmine-bin'),
        '--coffee', '--json',
        pkg_resources.resource_filename('your.package', 'tests'))
Development
The source code is available in the mercurial repository at
Please report any bugs you find at
Changelog
1.1 (unreleased)
- Nothing changed yet.
1.0 (2012-01-24)
- Reworked documentation.
0.1.4 (2012-01-20)
- Package description.
0.1.3 (2012-01-20)
- Improved the docs.
0.1.2 (2012-01-19)
- Repair broken release (again).
0.1.1 (2012-01-19)
- Repair broken release (0.1).
0.1 (2012-01-19)
- first release.
- Author: Wolfgang Schnerring <ws at gocept dot com>, Michael Howitz <mh at gocept dot com>
- License: ZPL
Source: https://pypi.python.org/pypi/gocept.exttest/1.0
Created on 2018-02-08 08:46 by rhettinger, last changed 2018-02-11 09:29 by rhettinger. This issue is now closed.
This also applies to 3.6 because ChainMap can be used with OrderedDict.
An alternate implementation:
d = {}
for mapping in reversed(self.maps):
    d.update(mapping)
return iter(d)
Unfortunately, both implementations work only with hashable keys. In the general case, mappings may not be hash tables.
See discussion on PR 5586 regarding backporting to 3.6.x.
Sorry, I was wrong. reversed() is not needed here.
The advantage of this implementation is that it can be faster because of iterating mappings in C code instead of Python code. But the side effect of it is that the iterator keeps references to all values. If this is not desirable, the code can be written something like:
return iter(dict.fromkeys(itertools.chain.from_iterable(self.maps)))
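The dict.fromkeys variant above can be checked interactively: it yields each key at the position of its first occurrence across the maps, without holding references to any values:

```python
from itertools import chain

# Two maps with a duplicated key, as ChainMap would hold them.
maps = [{1: 'first', 2: 'first'}, {3: 'second', 1: 'second'}]

# dict.fromkeys deduplicates while preserving first-seen order (3.7+),
# so a duplicate key keeps its earlier position.
order = list(dict.fromkeys(chain.from_iterable(maps)))
print(order)   # [1, 2, 3]
```

Only the keys pass through dict.fromkeys, which is exactly the point of this variant compared with building a full merged dict.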
New changeset 3793f95f98c3112ce447288a5bf9899eb9e35423 by Raymond Hettinger in branch 'master':
bpo-32792: Preserve mapping order in ChainMap() (GH-5586)
On PR 5586 we discussed reversing the order of the iteration by mappings. There are reasons for doing this. But this adds a subtle behavior change. Currently list(ChainMap({1: int}, {1.0: float})) returns [1]. With reversed() it will return [1.0]. This change LGTM.
The code in msg311969 doesn't reuse hash values. The following implementations do this:
return iter({**m for m in reversed(self.maps)})
or, without keeping references to values:
return iter(list({**m for m in reversed(self.maps)}))
New changeset 170b3f79506480f78275a801822c9ff1283e16f2 by Raymond Hettinger (Miss Islington (bot)) in branch '3.7':
bpo-32792: Preserve mapping order in ChainMap() (GH-5586) (#GH-5617)
> The code in msg311969 doesn't reuse hash values.
That doesn't make sense. The dict.update() method reuses the hashes of the input mappings when possible.
>>> from collections import ChainMap
>>> class Int(int):
...     def __hash__(self):
...         print(f'Hashing {self}', file=sys.stderr)
...         return int.__hash__(self)
...
>>> import sys
>>> d = { Int(1): 'f1', Int(2): 'f2' }
Hashing 1
Hashing 2
>>> e = { Int(1): 's1', Int(3): 's3' }
Hashing 1
Hashing 3
>>> c = ChainMap(d, e)
>>> list(c) # Note, no calls to hash() were made
[1, 3, 2]
I referred to msg311969, not msg311822.
Source: https://bugs.python.org/issue32792
java.lang.Object
  oracle.dss.rules.Rule
public class Rule
An object that specifies the property values to use when a Dataview renders an item. The property values are specified in a Mergeable object. The applies method specifies whether the properties in the Mergeable should be used. In this class, the applies method always returns true, so the rule always applies.
Rule objects are normally stored in RuleBundle objects. RuleBundle objects are stored in vectors, which are then passed to a Manager class, which runs all the rules in all of the rule bundles. The result is a single Mergeable object that specifies all of the settings that the Manager should use to render an item in a Dataview.
Subclasses of this class can override methods in order to produce more specific rules. For example, the DiscriminatorRule specifies conditions under which the rule applies.

See Also: Mergeable, DiscriminatorRule, RuleBundle, Serialized Form
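The Rule/RuleBundle/Manager flow described above can be sketched language-neutrally. The real API is Java; this Python sketch only mirrors the documented control flow (applies gates fireRule; a bundle runs every rule against one shared target), with a plain dict standing in for Mergeable:

```python
class Rule:
    """Mirrors the javadoc contract: applies() gates fire_rule()."""
    def __init__(self, mergeable=None):
        self._mergeable = dict(mergeable or {})

    def applies(self, context, target):
        return True                      # base class: the rule always applies

    def fire_rule(self, context, target):
        target.update(self._mergeable)   # merge this rule's settings in

    def run_rule(self, context, target):
        if self.applies(context, target):
            self.fire_rule(context, target)
            return True
        return False

class RuleBundle:
    """Runs every rule against one shared target, like the Manager does."""
    def __init__(self, rules):
        self.rules = list(rules)

    def run(self, context, target):
        for rule in self.rules:
            rule.run_rule(context, target)
        return target

class DiscriminatorRule(Rule):
    """A conditional rule, like the DiscriminatorRule mentioned above."""
    def __init__(self, key, value, mergeable):
        super().__init__(mergeable)
        self.key, self.value = key, value

    def applies(self, context, target):
        return context.get(self.key) == self.value

bundle = RuleBundle([Rule({"color": "black"}),
                     DiscriminatorRule("column", "Sales", {"color": "red"})])
print(bundle.run({"column": "Sales"}, {}))   # {'color': 'red'}
```

Later rules overwrite earlier ones when they fire, which is how an always-on default rule and a conditional override coexist in one bundle.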
protected Mergeable m_mergeable
public static final int RESET_NONE
public static final int RESET_XML_PROPERTIES
public static final int RESET_EVERYTHING
public Rule()

Constructor that does not specify a Mergeable object. If you call this constructor, and the rule needs a Mergeable object, then you must also call the setFixedMergeable method.

See Also: setFixedMergeable(oracle.dss.rules.Mergeable), Mergeable
public Rule(Mergeable mergeable)

Constructor that specifies a Mergeable object.

Parameters: mergeable - The Mergeable object whose property values should be used when this rule applies.
public void setID(long id)

Parameters: id - A long number that identifies this Rule. This id is internally set by our UI Panels.

public long getID()

Returns: The long number that identifies this Rule.
public java.lang.Object clone()

Overrides: clone in class java.lang.Object
public boolean applies(RuleContext context, Mergeable target) throws RuleException

Specifies whether this rule applies. This implementation always returns true.

Parameters:
context - The context of the item that is to be rendered. The Dataview provides this parameter.
target - The Mergeable object whose properties will be modified when the rule is fired, if this method returns true. Overriding methods may or may not use this parameter. This must be the same class as the object retrieved by calling the getFixedMergeable method.

Returns: true. Overriding methods should return true or false.

Throws: RuleException - If context is unusable for some reason or if this Rule has a problem.
public boolean runRule(RuleContext context, Mergeable target) throws RuleException

Calls the applies method. If the applies method returns true, then this method calls the fireRule method. The RuleBundle method calls this method for each rule in the bundle. Extending classes may or may not choose to override this method. (An implementation might skip the call to applies altogether and just call fireRule.)

Parameters:
context - The context of the item that is to be rendered. The Dataview provides this parameter.
target - The Mergeable object whose properties will be modified when the rule is fired, if this method returns true. Included in case the applies method wants to examine the Mergeable. This must be the same class as, or a subclass of, the object retrieved by calling the getFixedMergeable method.

Returns: true if the applies method returns true, false if not.

Throws: RuleException - If context is unusable for some reason or if this Rule has a problem.
public void fireRule(RuleContext context, Mergeable target) throws RuleException

Modifies a Mergeable object to include property settings that this Rule specifies. This implementation merges the specified Mergeable object with the Mergeable object of this Rule. Merging the two objects adds the property settings from this Rule to those property settings that will be used when the Dataview renders an item.

Parameters:
context - The context of the item that is to be rendered. The Dataview provides this parameter. Subclasses might examine the context to determine how to modify the Mergeable object.
target - The Mergeable object whose properties will be modified. This must be the same class as, or a subclass of, the object retrieved by calling the getFixedMergeable method.

Throws: RuleException - If context is unusable for some reason or if this Rule has a problem.
public Mergeable getFixedMergeable()

Retrieves the Mergeable object that specifies property settings that should take effect when this Rule applies.

Returns: The Mergeable that specifies the property settings for this Rule.

public void setFixedMergeable(Mergeable mergeable)

Specifies the Mergeable object whose property values should be set whenever this Rule applies.

Parameters: mergeable - The object that specifies property values that should be set whenever this Rule applies.
public oracle.dss.util.xml.ObjectNode getXML(boolean allProperties, ComponentTypeConverter converter)

Parameters:
allProperties - true to store all property values in XML, false to store only values that are different from default values.
converter - A class that converts view component identifiers from strings to integers and back. Dataview objects implement the ComponentTypeConverter interface.

public boolean setXML(oracle.dss.util.xml.ObjectNode objectNode, java.lang.String version, int reset)

Parameters:
objectNode - ObjectNode that has the properties and their values.
version - The XML version.
reset - A constant that indicates how much to reset when XML is applied. Valid values are listed in the See Also section.

Returns: true if XML is properly applied, false if the XML cannot be applied.

See Also: RESET_NONE, RESET_XML_PROPERTIES, RESET_EVERYTHING
Source: http://docs.oracle.com/cd/E28280_01/apirefs.1111/e12063/oracle/dss/rules/Rule.html
This article shows how to add a menu interface to an application from a DLL at any time.
This example was written using VC++.NET 2003 but it should translate to VC++ 6.0 easily, as much of this was developed using 6.0 early on. No managed code was harmed in the process of writing this article.
I wanted a flexible way to load a DLL into my application to test it in-house and not leave any marks when the customer got the application. Future enhancements could include targeted computer-based training modules.
This is not meant to be a be-all end-all treatise, but, rather, a springboard for extending applications after the application has been written.
There are two parts to this problem: the plug-in DLL and the target application. The target application needs to know how to call the plug-in DLL without the benefit of lib file. We accomplish this by standardizing the plug-in interface. The interface for this project is in the TestPlugin project in plugin_api.h. There are other ways of handling the interface. For instance, you could create a structure of function pointers and populate it after LoadLibrary and clean it out before FreeLibrary. In this example, you only have to store the HMODULE value. If you had more than one plug-in DLL, you would only need to store the HMODULE values and not have to carry many structures of the function pointers.
Let's talk about the plug-in DLL first.
The Interface: There are four methods defined publicly for this DLL in TestPlugin.def. They are InstallExtMenu and RemoveExtMenu which install and remove the menus respectively, GetExtMenuItemCount which gives the application the number of menu items installed, and GetExtMenuItem which serves to map the menu control ID to a Windows message identifier. More can be done to extend the interface but this appears to be the bare minimum to get this up and running. The file plugin_api.h handles the details of connecting to the correct DLL based on a save HMODULE value from LoadLibrary.
CTestPluginApp: There are two ways of introducing user-defined Windows messages. One is defining a value greater than WM_USER; the other is to get a value using RegisterWindowMessage. WM_USER is useful for messages internal to an application. RegisterWindowMessage is useful when a particular message may be used across applications. We are using RegisterWindowMessage because this DLL may be servicing more than one application and because other DLLs can also use the registered messages. The registered messages are really static UINTs, attached to the CTestPluginApp object and initialized when the DLL is loaded. CTestPluginApp also contains the menu ID registered to the message ID map which is used by GetExtMenuItem to return the registered message when the menu item is selected. You will notice that the map class used is MFC's CMap<> template. My only reason for using it here is to maintain the MFC framework and not to clutter the code by importing STL. My personal preference is to use std::map over CMap.
CCommandWnd: This window receives the registered message from the target application. When the application initializes the plug-in DLL, it passes an HWND to the DLL so that the DLL can set up the menus for that window. In addition to setting up the menus, the DLL also creates a CCommandWnd window as a child to the window passed in.
Now let's talk about the target application.
CMainFrame: The main frame window, rather than the CView or CDocument, handles the plug-in's menu commands. In CMainFrame::OnCommand, the application uses CWnd::GetDlgItem to find the plug-in's child command window and forwards the registered message to it:
BOOL CMainFrame::OnCommand(WPARAM wParam, LPARAM lParam)
{
// if wParam translates to our internal message
// then send the internal message
UINT nSendMsg = 0 ;
if (::GetExtMenuItem(m_TestModule, (UINT)wParam, &nSendMsg) != FALSE)
{
CWnd * pWnd = GetDlgItem( CHILD_WINDOW_ID ) ;
if ( pWnd != NULL && pWnd->GetSafeHwnd() != NULL )
{
// if ::GetExtMenuItem returns TRUE and we have the child
// window then send the message to the child
return (BOOL)pWnd->SendMessage( nSendMsg, 0, 0 ) ;
}
}
return CFrameWnd::OnCommand(wParam, lParam);
}
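The routing above boils down to a lookup table from menu control ID to registered window message. A minimal sketch of that dispatch (Python here for brevity; the real code is C++ with a CMap and an out-parameter, and these names are hypothetical):

```python
class PluginMenuMap:
    """Menu-ID -> registered-message-ID table, as GetExtMenuItem exposes it."""
    def __init__(self):
        self._menu_to_msg = {}

    def install(self, menu_id, message_id):
        self._menu_to_msg[menu_id] = message_id

    def get_ext_menu_item(self, menu_id):
        # Mirrors the BOOL-return-plus-out-parameter idiom of the C interface.
        if menu_id in self._menu_to_msg:
            return True, self._menu_to_msg[menu_id]
        return False, 0

def on_command(menu_map, menu_id, send_message):
    found, msg = menu_map.get_ext_menu_item(menu_id)
    if found:
        return send_message(msg)     # forward to the plug-in's child window
    return None                      # fall through to the default handler

m = PluginMenuMap()
m.install(40001, 0xC123)
print(m.get_ext_menu_item(40001))   # (True, 49443)
```

The same lookup serves both OnCommand (route the message) and OnCmdMsg (decide whether to keep the item enabled), which is why the DLL only needs to export that one mapping function.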
CMainFrame::OnCmdMsg: Because the application has no ON_COMMAND or ON_UPDATE_COMMAND_UI handlers for the plug-in's menu items, MFC's automatic menu-enabling logic (m_bAutoMenuEnable) in CFrameWnd would normally disable them. Overriding CMainFrame::OnCmdMsg to return TRUE for menu IDs the plug-in owns, rather than letting CFrameWnd::OnCmdMsg return FALSE, keeps those items enabled:
BOOL CMainFrame::OnCmdMsg(UINT nID, int nCode, void* pExtra,
AFX_CMDHANDLERINFO* pHandlerInfo)
{
if ( nCode == CN_COMMAND )
{
// if nID translates to our internal message
// then enable the menu item
// otherwise, let OnCmdMsg() handle nID.
UINT nPostItem = 0 ;
// does the plugin own this menu item?
if ( ::GetExtMenuItem( m_TestModule, nID, &nPostItem ) != FALSE )
{
return TRUE ; // if yes, then enable it by returning TRUE
}
}
// otherwise, let the CFrameWnd handle it
return CFrameWnd::OnCmdMsg(nID, nCode, pExtra, pHandlerInfo);
}
I have set up a macro called _PLUGIN_ON_DEMAND in stdafx.h. If you undef this macro, CTargetApp will try to load the DLL in its InitInstance method and unload it in the ExitInstance method. I have also included an alternate IDR_MAINFRAME menu (with the subtitle ALTMENU) that you can use when _PLUGIN_ON_DEMAND is not defined, to show how the menus can be added when the top-level Tests menu does not exist.
This is version 1.0!
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
Andrewpeter wrote:I use VS6 version, can you translate it into VC++6.0?
f2 wrote:can i apply this to Extension DLLs?
Source: http://www.codeproject.com/script/Articles/View.aspx?aid=11640
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
On Fri, May 06, 2005 at 12:01:26AM -0400, David Edelsohn wrote:
> Aldy,
>
> Why are
>
> #define MASK_PROFILE_KERNEL 0x00400000
> #define TARGET_PROFILE_KERNEL (target_flags & MASK_PROFILE_KERNEL)
>
> defined in linux64.h after your patch?

Bug bug bug. Thanks for spotting this. The patch below fixes this. I tested it by making cc1 for ppc64-linux. I moved the comment to output_profile_hook, so we don't lose it.

OK?

	* config/rs6000/linux64.h: Remove MASK_PROFILE_KERNEL, and
	TARGET_PROFILE_KERNEL.
	* config/rs6000/rs6000.c (output_profile_hook): Add comment to
	TARGET_PROFILE_KERNEL use.

Index: config/rs6000/linux64.h
===================================================================
RCS file: /cvs/gcc/gcc/gcc/config/rs6000/linux64.h,v
retrieving revision 1.76
diff -c -p -r1.76 linux64.h
*** config/rs6000/linux64.h	5 May 2005 20:54:21 -0000	1.76
--- config/rs6000/linux64.h	6 May 2005 10:53:04 -0000
*************** extern int dot_symbols;
*** 206,219 ****
  #endif

- #define MASK_PROFILE_KERNEL 0x00100000
-
- /* Non-standard profiling for kernels, which just saves LR then calls
-    _mcount without worrying about arg saves.  The idea is to change
-    the function prologue as little as possible as it isn't easy to
-    account for arg save/restore code added just for _mcount.  */
- #define TARGET_PROFILE_KERNEL (target_flags & MASK_PROFILE_KERNEL)
-
  /* We use glibc _mcount for profiling.  */
  #define NO_PROFILE_COUNTERS TARGET_64BIT
  #define PROFILE_HOOK(LABEL) \
--- 206,211 ----

Index: config/rs6000/rs6000.c
===================================================================
RCS file: /cvs/gcc/gcc/gcc/config/rs6000/rs6000.c,v
retrieving revision 1.820
diff -c -p -r1.820 rs6000.c
*** config/rs6000/rs6000.c	5 May 2005 20:54:22 -0000	1.820
--- config/rs6000/rs6000.c	6 May 2005 10:53:12 -0000
*************** rs6000_gen_section_name (char **buf, con
*** 15233,15238 ****
--- 15233,15242 ----
  void
  output_profile_hook (int labelno ATTRIBUTE_UNUSED)
  {
+   /* Non-standard profiling for kernels, which just saves LR then calls
+      _mcount without worrying about arg saves.  The idea is to change
+      the function prologue as little as possible as it isn't easy to
+      account for arg save/restore code added just for _mcount.  */
    if (TARGET_PROFILE_KERNEL)
      return;
Source: https://gcc.gnu.org/legacy-ml/gcc-patches/2005-05/msg00476.html
Contiguous Cache Example
def data(self, index, role):
    if role != Qt.DisplayRole:
        return QVariant()
    ...

def cacheRows(self, from_, to):
    for i in range(from_, to + 1):
        ...
lastIndex() and firstIndex() allow the example to determine what part of the list the cache is currently caching. These values don't represent indexes into the cache's own memory, but rather a virtual infinite array that the cache represents.

By using append() and prepend(), the code ensures that items that may still be on the screen are not lost when the requested row has not moved far from the current cache range.
def fetchRow(self, position):
    return QString.number(QRandomGenerator.global().bounded(...))
Example project @ code.qt.io
Source: https://doc-snapshots.qt.io/qtforpython-dev/overviews/qtcore-tools-contiguouscache-example.html
Other JSE/JEE APIs
How to connect to Linux using Java code?
Anam Ghouri
Greenhorn
Joined: Oct 24, 2011
Posts: 2
posted Oct 24, 2011 12:57:28
Hi,
I need to connect to Linux using Java code from my Windows machine and execute a Linux command. Following is the code I am using:
import java.io.*;
import java.net.Socket;

public class RunCommand {
    public static void main(String args[]) {
        String s = null;
        try {
            Socket s1 = new Socket("ip address", port);
            PrintWriter wr = new PrintWriter(new OutputStreamWriter(s1.getOutputStream()), true);
            wr.println("Hi Server...");
            wr.flush();
            BufferedReader br = new BufferedReader(new InputStreamReader(s1.getInputStream()));
            System.out.println(br.readLine());
            // run the Unix "grep" command
            Process p = Runtime.getRuntime().exec("grep info filepath");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
I am getting following error while executing this code:
java.io.IOException: Cannot run program "grep": CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessBuilder.start(Unknown Source)
at java.lang.Runtime.exec(Unknown Source)
at java.lang.Runtime.exec(Unknown Source)
at java.lang.Runtime.exec(Unknown Source)
at macys.RunCommand.main(RunCommand.java:45)
Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
at java.lang.ProcessImpl.create(Native Method)
at java.lang.ProcessImpl.<init>(Unknown Source)
at java.lang.ProcessImpl.start(Unknown Source)
... 5 more
exception happened - here's what I know:
I think it is not executing linux command because it is still in Windows operating system. When I include the following code in my program, i can see that is still in Windows.
String os = System.getProperty("os.name");
if (os != null && os.startsWith("Windows")) {
    System.out.println("I am windows");
} else {
    System.out.println("I am Linux");
}
Output: "I am windows"
I need to connect to Linux and execute a Linux command from Windows. How do I do it using Java code? Can somebody please help me out in this asap?
Tim Moores
Rancher
Joined: Sep 21, 2011
Posts: 2409
posted Oct 24, 2011 13:12:00
Oh boy. With all due respect, I think you have quite limited understanding of network computing and remote access. Runtime.exec executes on the local machine; where else would it execute? Do you see anything in that method call that could be construed as telling it to do otherwise (if that were even possible, which it is not)?
If you want to execute commands on a remote machine, you need to open a Telnet or SSH session. Telnet has fallen out of favor since it's less secure than SSH, so the target machine may not support it; if it does, then you can use the Telnet client code that's part of the Apache Commons Net library.
If SSH is available, check out JSch; it's a Java client library for that.
Lastly, using readLine and println with sockets is bound to cause you grief. This is a good article on that issue:
Anam Ghouri
Greenhorn
Joined: Oct 24, 2011
Posts: 2
posted Oct 24, 2011 18:03:07
Here is the code for someone who needs it:
import com.jcraft.jsch.*;
import java.io.*;

public class Exec {
    public static void main(String[] arg) {
        try {
            JSch jsch = new JSch();
            String host = null;
            if (arg.length > 0) {
                host = arg[0];
            } else {
                host = "username@ipaddress"; // enter username and ipaddress for the machine you need to connect to
            }
            String user = host.substring(0, host.indexOf('@'));
            host = host.substring(host.indexOf('@') + 1);
            Session session = jsch.getSession(user, host, 22);

            // username and password will be given via UserInfo interface.
            UserInfo ui = new MyUserInfo();
            session.setUserInfo(ui);
            session.connect();

            String command = "grep 'INFO' filepath"; // enter any command you need to execute
            Channel channel = session.openChannel("exec");
            ((ChannelExec) channel).setCommand(command);
            channel.setInputStream(null);
            ((ChannelExec) channel).setErrStream(System.err);
            InputStream in = channel.getInputStream();
            channel.connect();

            byte[] tmp = new byte[1024];
            while (true) {
                while (in.available() > 0) {
                    int i = in.read(tmp, 0, 1024);
                    if (i < 0) break;
                    System.out.print(new String(tmp, 0, i));
                }
                if (channel.isClosed()) {
                    System.out.println("exit-status: " + channel.getExitStatus());
                    break;
                }
                try { Thread.sleep(1000); } catch (Exception ee) { }
            }
            channel.disconnect();
            session.disconnect();
        } catch (Exception e) {
            System.out.println(e);
        }
    }

    public static class MyUserInfo implements UserInfo {
        String passwd;
        public String getPassword() { return passwd; }
        public boolean promptYesNo(String str) { return true; }
        public String getPassphrase() { return null; }
        public boolean promptPassphrase(String message) { return true; }
        public boolean promptPassword(String message) {
            passwd = "password"; // enter the password for the machine you want to connect to.
            return true;
        }
        public void showMessage(String message) { }
    }
}
I agree. Here's the link:
subject: How to connect to Linux using Java code?
http://www.coderanch.com/t/556707/java/java/connect-Linux-Java-code
1. The calling routine knows the correct format to send it in.
2. The 'right format' is a hash containing a set of records to convert, as produced by Mozilla::Mork.
3. This is not a full file conversion - some fields are missing if there is no corresponding field; also, some fields are created from, or imported from, others.
4. You want to print the results so you can capture the output to a file of your choice.
It turns out that the import engine that comes with the Blackberry Desktop (I tested with version 4.0) will happily import duplicates, even if you tell it not to; so I suggest this for a bulk load only, rather than multiple import runs. For an ongoing conversion I suggest syncing with Outlook/Outlook Express and using Dawn (See Also, below) to manage the combining and conversion. I might write a conversion routine, but so far I've shaved my particular yak.
Also, the correct place for this code, if I am being honest, is an addition to the Mail::Addressbook::Convert suite. However, my time to work on this is limited, and by releasing it this way I can get it 'out there' for others to use. If someone wants to incorporate it into the above namespace, good for them. I plan to, but have no idea when I'll actually get around to it.
new()
Create the new() OO object. Don't mind the man behind the curtain...
PrintBlackberryHeaders()
Print the Blackberry field headers - iterate over every instance and print it, basically.
ReturnBlackberryHeaders()
Return a list of the header fields that Blackberry uses in its .CSV import file.
StreamConvert()
Convert each record passed, from each line in an array. Expects a hash reference which contains a set of data to convert. This only converts one set of records at a time; to convert the entire file, you must call it repeatedly. Returns a single scalar containing the CSV record set of the converted record, along with printing the record to STDOUT. #TODO separate the printing and returning routines.
http://search.cpan.org/~kript/Convert-Addressbook-Mozilla2Blackberry-0.01/lib/Convert/Addressbook/Mozilla2Blackberry.pm
changelog2x - Transform ChangeLogML files using XSLT stylesheets
changelog2x --format html --css /styles/changelog.css < Changes.xml
This script is a simple example of transforming ChangeLogML mark-up into user-friendly formats. The application itself does very little work; its purpose is mainly to process and handle the command-line options, then delegate the actual work to the App::Changelog2x module.
changelog2x installs with a set of sample XSLT stylesheets, as well as the XML Schema definition of ChangeLogML. These stylesheets allow conversion to valid XHTML (with comprehensive use of CSS classes for styling), a plain-text format based on typical
ChangeLog files in use, and a variety of snippets useful for inclusion or embedding within other documents.
There are two distinct groups of options: application options that are used by changelog2x, and stylesheet options that are passed through to the XSLT processor for use/recognition by the processor itself. The latter group of options control such things as URLs for CSS, sorting order, etc.
These options control the behavior of the application itself, and are not passed to the actual stylesheet processing:
If this option is passed, it specifies the ChangeLogML XML file to process. If the value of this string is
-, or if the option is not passed at all, then the STDIN file-handle is used.
If this option is passed, it specifies the file to write the transformed content out to. If the value of this string is
-, or if the option is not passed at all, then the STDOUT file-handle is used.
This option specifies an alternate format pattern for the DateTime
strftime method to use when formatting the dates in the
<release> tags. Note that DateTime->strftime formatting is sensitive to your locale setting. The format is also used for those output templates that include "generated on" comments.
This option may be abbreviated as
-f for convenience.
Specifies the XSLT template (stylesheet) to apply; a simple name is resolved against the template root directory (see the
templateroot option, next).
If the parameter does not match the pattern, it is assumed to be a file name. If it is not an absolute path, it is looked for under the template root directory. As a special case, if the path starts with a
. character, it is not converted to absolute.
Once the full path and name of the file has been determined, if it cannot be opened or read an error is reported.
This option may be abbreviated as
-t for convenience. The default value of this option is
html. See "Template Option Values" for the list of templates/stylesheets provided with the application, and what they produce.
Specifies an alternative root directory for locating the XSLT templates (stylesheets). By default, the root directory is a sub-directory called
changelog2x in the same directory that the App::Changelog2x class-file is installed into. A directory specified with this option is added to the list of paths that get searched, so you can specify a directory that (for example) only provides a template for
text, while still having the rest of templates be findable in the default directory.
If you do add a path, you can also take advantage of the expansion of "string" arguments to the
template option (see above) into full file names, if you have files that fit that pattern in your chosen template directory.
This option may be abbreviated as
-tr for convenience. It may be specified multiple times, with the search-order of the directories being the same order they're given on the command-line.
This option allows the user to specify additional
<head>-block content as the contents of a file, in lieu of the
headcontent option below, under "Stylesheet Options". See the documentation of that option for more detail of its role. This option makes it easier to specify large and/or complex values that would otherwise be difficult or impossible to pass on a command-line.
If both this and
headcontent are passed, this option takes precedence. If the file name specified cannot be opened or read, an error is reported.
This option may be abbreviated as
--head for convenience.
This option allows the user to specify additional
<body>-block content as the contents of a file, in lieu of the
bodycontent option below, under "Stylesheet Options". See the documentation of that option for more detail of its role. This option makes it easier to specify large and/or complex values that would otherwise be difficult or impossible to pass on a command-line.
If both this and
bodycontent are passed, this option takes precedence. If the file name specified cannot be opened or read, an error is reported.
This option may be abbreviated as
--body for convenience.
The following set of options are actually used by the XSLT processor (XML::LibXSLT in this case), and are not directly used by changelog2x at all. They are passed in to the transformation phase of the processing after being converted to XPath-style strings.
Some of the options only apply to certain of the stylesheets. This is denoted by listing the templates that the option affects in square brackets after the option-type.
This is a boolean option. If given, it disables the generation of the shortcut-links at the top of the full-page XHTML rendition of the ChangeLogML file.
A string that specifies which release-versions should be processed. By default, all
<release> blocks are processed. If this parameter is given, it acts as a sort of filter to limit the set of releases. The acceptable values for the string are:
This is the default value; all release blocks are processed in the sorted order.
If this value is given, then the first release block (based on sorting order) is processed and all others are ignored.
Generally, the value is assumed to be a comma-separated list of versions as defined by the
version attribute of the
<release> tag. As each release block is considered, if the version is present in the user-provided list then the release block is processed. There is (currently) no sort of wildcarding or regular-expression matching provided for the list.
Any string that is not
first or
all is assumed to be a list of versions. If it is badly-formed, it will likely not match any of the release blocks, and none will be processed.
This parameter should be a string whose value is one of
ascending or
descending. It controls the order in which the release blocks are sorted by their
date attributes. The default is
descending, which places the newest version at the top of the resulting document.
Date-sorting is used as proper sorting of version strings is usually problematic. Dates expressed in ISO 8601 will sort correctly when sorted as text. The only caveat is that two releases close to each other in different timezone-offsets could sort incorrectly, since the sorting would key off of the hours portion before taking the offsets into consideration. This is a limitation of XSL's sorting capabilities.
For the
htmlnewest and
htmlversion output templates, the overall XHTML content is much smaller than the other XHTML-oriented stylesheets. To this end, this option allows the user to specify an explicit CSS style-name to give to the containing elements that are generated. In the case of the
htmlnewest stylesheet, this is a
<div>. In the case of
htmlversion, it is a
<span>. See the documentation below ("Template Option Values") for the default class names for each of the templates.
Specifies a URL to be used as the basic CSS stylesheet when rendering a complete XHTML document. If given, a
<link> element is created in the document's
<head> section with the
rel attribute set to
stylesheet, the
type attribute set to
text/css and the
href attribute set to the value of this parameter. No checking is done on the URL, and no constraints are applied. The URL may be absolute, relative, etc.
The only distinction between this parameter and the next one, is that this one will occur first in the
<head> block, and thus be first in the CSS cascade model.
As above, but this parameter is used to allow a second URL to be specified, one that will follow the previous one in the CSS cascade order. This allows the user to have a "main" stylesheet with font, spacing, etc. declarations while also using this option to select between color schemes for text, backgrounds, etc. (hence the choice of
color as the option name).
Like the two CSS-related options above, this allows the specification of a URL to be included in the document head-section. Unlike the previous, this URL is assumed to refer to a Javascript resource. As such, it triggers the generation of a
<script> element with a
type attribute set to
text/javascript and a
href attribute set to the value of this parameter.
This element occurs after any content specified in the
headcontent (or application option
headcontentfile) is included in the output. Thus, it can safely refer to any functions, etc. defined in that content.
These options allow for the user to provide arbitrary content for the
<head> and/or
<body> sections of the XHTML document, when rendering a full document with the
html template.
Realizing that the generalized stylesheets provided by this package won't fit every user's needs, these options are a sort of "wildcard" pass to include anything that can't be achieved by the existing stylesheet-targeted parameters. Note that as command-line arguments, they are limited as to how complex the values can be. Hence the
headcontentfile and
bodycontentfile options, which are handled by the application before processing is handed off to XML::LibXSLT. Also note that the file-oriented options to the application will override any values passed in via either of these options.
Allow for the user to pass additional parameters to the XSLT processing phase beyond those defined here. If you have written your own XSLT stylesheets to use with the
template and/or
templateroot options, you may also have need for your own XSLT parameters. You may provide as many of these as you wish with this option. Each occurrence should have a value of the form,
name=value, where
name is the name the parameter will have when passed to the XSLT processor, and
value will be the content of the parameter.
This application installs with (at present) nine pre-defined stylesheets available for use. These are the potential values of the
template option to the application (the default being
html). The stylesheets fall into two groups: XHTML and plain-text.
These templates produce content that is either complete, valid XHTML, or snippets that are conformant and should be easily included in larger documents:
This is the default stylesheet, which generates a complete XHTML document. The
<body> tag and all its children will have CSS classes associated with them that indicate the hierarchy to some extent, and allow for comprehensive styling via CSS.
The structure of the document is basically:
HEAD
  headcontent parameter
  <title>
  CSS parameters
  javascript parameter
BODY
  bodycontent parameter
  <h1> containing same text as <title>
  <div> containing abstract (top-most <description> block)
  ToC-style links
  <hr>
  <div> containing one or more release blocks:
    <div> wrapping one release:
      <span> containing subproject name (if release is from a subproject)
      <span> containing version number
      <span> containing release date
      <p> containing release-level <description>, if present
      <div> containing one or more change blocks:
        <div> wrapping one change:
          <span> containing transaction revision, if any
          <ul> containing one or more files:
            <li> containing one file, possibly with revision and/or action information
          <p> containing the change-level <description>
  <hr>
  <div> containing diagnostics/credits data
This doesn't include most of the viewer-visible content that doesn't come directly from the input file (things like labels, etc.), except for the two horizontal-rule elements, which contribute to the overall visual structure. Every element referred to above (and some that are implied, but not explicitly listed) is given a CSS class name. See "CSS Class Hierarchy" for details on the class names and where they are used.
This stylesheet renders a structure similar to the above, except that it only produces the
<div> element that contains the release blocks. Referring to the structure above, this is the
<div> that immediately follows the first
<hr>. An XML comment is included with some information on the version of the stylesheet used, as well as tools. However, no visible content is included (i.e., no "footer" as follows the second
<hr> in the layout above).
Like the previous stylesheet, this produces an XHTML fragment suitable for inclusion in a larger document. However, it differs in that the outermost container is not a
<div>, but instead a
<ul>. The containing
<ul> is assigned a different CSS class than the
<ul> containers used for change-blocks. Each release-block is rendered within one
<li> child-element (which is also assigned a distinct class from the similar elements used within the change-blocks). This stylesheet also includes some diagnostic information as an XML/XHTML comment, but does not include it in any visible elements.
As with the previous two stylesheets, this produces an XHTML fragment for inclusion in other documents. In this case, the outermost container element is a
<dl>. The structure of this template's output is also somewhat different: where the previous two rendered each release-block in the same manner as the whole-document stylesheet, this stylesheet moves the pseudo-heading line that contains the word "Version" followed by the release's version number into the
<dt> element, and renders the remainder of the release block in the
<dd> element. As with the others, the stylesheet also includes some diagnostic information as an XML/XHTML comment, but does not include it in any visible elements.
This is a special variation of the
div stylesheet, that contains exactly one release-block, that of the most-recent release (as sorted by date). The outermost container is a
<div> element whose CSS class defaults to the same class used for the top-most container in the other templates. However, the user may specify a different CSS class with the
class stylesheet parameter (see "Stylesheet Options"), if they wish to have this XHTML fragment adhere to styles defined in a different CSS stylesheet. Diagnostic information is included within a comment.
This is similar to the previous stylesheet, but only renders a single
<span> tag containing the version-string of the newest release (as sorted by date). The element is assigned a CSS class whose name fits within the general naming scheme of other CSS classes used in these templates. As with the previous, the class can be specified by the user via the
class parameter.
For all varieties of XHTML output, any elements in
<description> blocks that belong to the namespace set aside by the W3C for XHTML are copied into the output verbatim, except that a
class attribute is added to allow the user to include CSS style information with the rest of the changelog-related CSS declarations. If the element already has a
class attribute, it is copied over and the new class name added at the end of the existing content. The new class name is created by appending the tag name to the string
changelog-html-. Thus, an element
p gets the class
changelog-html-p. For example (assuming that the
xhtml prefix has been declared to reference the XHTML namespace), the following content:
<xhtml:a href="">perl.org</xhtml:a>
yields this output:
<a href="" class="changelog-html-a">perl.org</a>
The following content (which already has a
class attribute):
<xhtml:span class="bold">Bold Span</xhtml:span>
yields:
<span class="bold changelog-html-span">Bold Span</span>
No other foreign XML tags are copied over, at present. Allowance has been made for future extension with information such as version-control system specification, hosting information, Dublin Core metadata, etc.
These templates produce plain-text output:
This template produces output that comes very close to the de-facto standard plain-text "Changelog" so familiar to open-source projects. After the project name and in-set description (formatted like a document abstract, left-justified and centered with regards to an 80-column page), the releases are presented in the sorted order (possibly filtered by the
versions parameter).
Each release starts with a line like this:
0.19 Monday October 20, 2008, 02:00:00 AM -0700
The version string is left-justified, followed by a single tab-stop character and the formatted date (see the
format application option to control the formatting of the dates).
Following the "header" for a release, each
<change> element is presented (in order) in a format roughly like this:
[ <transaction-revision number> ]
* FILE-1 [ <revision number> [, <action label> ] ]
...
    Change <description> text
If the change-block contains a
<fileset> that itself has a
revision attribute, the first line in the example above is produced, identifying this as the revision identifier for the transaction as a whole (similar to how systems like Subversion group commits of multiple files at once into a "transaction"). Then, all the files listed in the change are enumerated as a bulleted-list. For each file, if there is a
revision attribute on the
<file> element, it is displayed after the path. If the file has an
action attribute, a parenthetical action-label is further appended. Once all files have been listed, the contents of the
<description> element are displayed, indented 8 spaces and word-wrapped to a width of 70 columns.
At the end of the output, several lines are added with
# in the first two columns (pseudo-comment notation) that identify the revision of the XSLT stylesheet used, the date/time when it was processed, and the tools used to do the processing.
This template is similar to the
htmlnewest listed earlier, except that it generates plain-text. It outputs the newest revision as a single block, using the same format and layout as described above for
text. However, it does not output the pseudo-comments at the end.
This template is the plain-text counterpart to
htmlversion. It outputs just the version-string of the most-recent release. It does not output a newline character, so that the result of this can be saved to a file that can be later inserted into other files without bringing in a potentially-unwanted line-break. (As opposed to the output of the
textnewest stylesheet, above, which ends in a fully-formatted paragraph for which an ending newline makes sense.)
All stylesheets that generate plain-text will strip XHTML elements out of the output while retaining the text content they have. Thus, a construct like the example used above:
<xhtml:a href="">perl.org</xhtml:a>
will output as plain text simply:
perl.org
Null elements such as
<br /> or
<p></p> will not add anything to the output.
As with the XHTML templates, XML tags that are not part of ChangeLogML or XHTML are removed completely. Their presence is tolerated, however, to allow for future integration of additional metadata.
To illustrate the hierarchy of classes used to allow CSS styling, the diagram from earlier is revisited and revised:
<body class="changelog">
  <h1 class="changelog-title" />
  <div class="changelog-abstract" />
  <div class="changelog-toc-div"> [*]
    <a class="changelog-toc-link" />
  </div>
  <hr class="changelog-divider" />
  <div class="changelog-container-div">
    <div class="changelog-release-div">
      <span class="changelog-subproject-heading"> [*]
      <span class="changelog-release-heading">
        <a class="changelog-toc-link" /> (link back to top) [*]
        <span class="changelog-release-date">
          <span class="changelog-date" />
        </span>
      <p class="changelog-release-para" /> [*]
      <div class="changelog-release-changes-container">
        <div class="changelog-release-change">
          <span class="changelog-transaction-revision" /> [*]
          <ul class="changelog-release-change-ul">
            <li class="changelog-release-change-li">
              <tt class="changelog-filename" />
              <span class="changelog-file-revision" /> [*]
              <span class="changelog-release-file-action" /> [*]
            </li>
          <p class="changelog-release-change-para" />
        </div>
      </div>
    </div>
  <hr class="changelog-divider" />
  <div class="changelog-footer">
    <p class="changelog-credits">
      <span class="changelog-credits-revinfo" />
      <span class="changelog-credits-date" />
      <span class="changelog-credits-toolchain" />
    </p>
  </div>
</body>
Those elements marked with an asterisk (
*) to their right side might not be present. In some cases (the table-of-contents), they may be opted-out by the user. In other cases they are only present if there is data to be contained (that is, empty container-tags are not rendered).
The file
changelogml.css that comes with this distribution implements almost all of these classes, and can serve as a reference.
App::Changelog2x, XML::LibXSLT.
http://search.cpan.org/~rjray/App-Changelog2x/bin/changelog2x
public class Solution {
    public ListNode reverseBetween(ListNode head, int m, int n) {
        if (head == null || head.next == null || m == n) {
            return head;
        }
        ListNode mnode = head; // find the m node: the node the reversal will start at
        ListNode mprev = null; // previous node to mnode
        if (m == 1) { // if m == 1, the reversal starts at the beginning of the list
            mnode = head;
            mprev = null;
        } else {
            for (int i = 1; i < m; i++) {
                mprev = mnode;
                mnode = mnode.next;
            }
        }
        ListNode nnode = mnode; // the n node: the node the reversal will end at
        ListNode nnext = null;
        for (int i = m; i < n; i++) { // loop to find nnode and the node next to it
            nnode = nnode.next;
            nnext = nnode.next;
        }
        int j = n - m;
        while (j > 0) { // loop to start reversing
            ListNode temp2 = null;
            // If mprev is null, m starts at the head of the list,
            // so we need to keep changing the head position on every pass.
            if (mprev == null) {
                head = mnode.next;
                temp2 = mnode.next;
            } else { // otherwise the head position will not change
                mprev.next = mnode.next;
                temp2 = mprev.next;
            }
            ListNode temp = mnode;
            nnode.next = temp;
            temp.next = nnext;
            nnext = mnode;
            mnode = temp2;
            j--;
        }
        return head;
    }
}
ACCEPTED JAVA solution WITHOUT dummy head, is it good enough?
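For comparison, a common way to do the same reversal without a dummy head is the "head insertion" technique: walk to the node before position m, then repeatedly unlink the node after the current one and re-insert it at the front of the sublist. A minimal, self-contained sketch (ListNode and the helper methods are defined locally for illustration; this is not the poster's code):

```java
// Head-insertion reversal of positions m..n, no dummy node.
class ListNode {
    int val;
    ListNode next;
    ListNode(int val) { this.val = val; }
}

public class ReverseBetween {
    static ListNode reverseBetween(ListNode head, int m, int n) {
        if (head == null || m >= n) return head;
        ListNode prev = null, cur = head;
        for (int i = 1; i < m; i++) { // walk to the m-th node
            prev = cur;
            cur = cur.next;
        }
        // Move cur.next to the front of the sublist, n - m times.
        for (int i = 0; i < n - m; i++) {
            ListNode moved = cur.next;
            cur.next = moved.next;
            if (prev == null) {        // sublist starts at the head
                moved.next = head;
                head = moved;
            } else {
                moved.next = prev.next;
                prev.next = moved;
            }
        }
        return head;
    }

    static ListNode build(int... vals) { // helper: array -> list
        ListNode head = null, tail = null;
        for (int v : vals) {
            ListNode node = new ListNode(v);
            if (tail == null) head = node; else tail.next = node;
            tail = node;
        }
        return head;
    }

    static String render(ListNode node) { // helper: list -> "1 2 3"
        StringBuilder sb = new StringBuilder();
        for (; node != null; node = node.next) {
            if (sb.length() > 0) sb.append(' ');
            sb.append(node.val);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(render(reverseBetween(build(1, 2, 3, 4, 5), 2, 4))); // 1 4 3 2 5
        System.out.println(render(reverseBetween(build(1, 2, 3), 1, 3)));       // 3 2 1
    }
}
```

The m == 1 case is the only special one: the sublist has no predecessor, so the moved node becomes the new head instead of being linked after prev.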
https://discuss.leetcode.com/topic/10751/accepted-java-solution-without-dummy-head-is-it-good-enough
Today.
This is a great way to get community feedback on enhancements to the BCL. Thanks for including the wider .NET community in the design of these core components!
I’m particularly happy to see the new Long Path API. It was far too easy to create paths that were inaccessible using the classes in the System.IO namespace. I’ll be sure to give it a test drive!
I want you people to send me bcl codeplex site launch for me in order to use it to improve my web site.
Good news! But why don’t you use Google Code instead of Codeplex? 😉
This sounds like misuse of CodePlex. CodePlex is for open source projects.
If you don’t want to take submissions, you should use Code Gallery (also a Microsoft property) instead.
@Mark, being open source and taking submissions are two different things. No matter the license, all OSS projects have the right to not accept submissions. Also, the code on the BCL CodePlex site is licensed under MS-PL, which is approved by OSI:
Good approach for community feedback. THUMBS UP
@Paul Stovell: My mistake – sorry about being an ass. Feel free to delete my comments.
It’s about time!
https://blogs.msdn.microsoft.com/bclteam/2010/03/30/bcl-codeplex-site-launch/
First solution in Clear category for Count Consecutive Summers by Sillte
""" Policy
a + (a+1) + ... + (a + (a + t - 1)) = 1 / 2 * t * (2 * a + t - 1)
Seartch the number of a and t under the condition where a >= 1, t >= 1
and both of a and t are integer.
The calculation time is O(N).
I think efficient calculation of divisors are necessary to improve speed.
"""
def count_consecutive_summers(num):
def _condition(t):
if 2 * num % t != 0:
return False
u = 2 * num // t
a = (u - (t - 1)) / 2
return int(a) == a and 1 <= a
return sum(_condition(t) for t in range(1, 2 * num +1))
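As the docstring hints, the O(N) scan over t can be replaced by counting divisors: in 2*num = t * (2*a + t - 1), exactly one of the two factors is odd, so each odd divisor of num gives exactly one valid (a, t) pair. That brings the cost down to O(sqrt(num)). A sketch (the function name is mine, not part of the published solution):

```python
def count_consecutive_summers_fast(num):
    # Count the odd divisors of num in O(sqrt(num)); this equals the number
    # of ways to write num as a sum of consecutive positive integers.
    while num % 2 == 0:  # strip factors of two; only the odd part matters
        num //= 2
    count = 0
    d = 1
    while d * d <= num:
        if num % d == 0:
            count += 1 if d * d == num else 2  # count d and num // d
        d += 1
    return count

print(count_consecutive_summers_fast(9))   # 3: 9, 4+5, 2+3+4
print(count_consecutive_summers_fast(64))  # 1: a power of two only sums trivially
```

For example, 42 has odd part 21 with divisors 1, 3, 7, 21, matching its four representations: 42, 13+14+15, 9+10+11+12, and 3+4+5+6+7+8+9.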
Dec. 9, 2018
https://py.checkio.org/mission/count-consecutive-summers/publications/Sillte/python-3/first/share/737bccac94e03ee15a7af52e7f0aa3ed/
On Wednesday 24 October 2007 21:12, Kay Sievers wrote:
> On 10/24/07, Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> > On Tuesday 23 October 2007 10:55, Takenori Nagano wrote:
> > > Nick Piggin wrote:
> > > > One thing I'd suggest is not to use debugfs, if it is going to
> > > > be a useful end-user feature.
> > >
> > > Is /sys/kernel/notifier_name/ an appropriate place?
> >
> > I'm curious about the /sys/kernel/ namespace. I had presumed that
> > it is intended to replace /proc/sys/ basically with the same
> > functionality.
>
> It was intended to be something like /proc/sys/kernel/ only.

Really? So you'd be happy to have a /sys/dev /sys/fs /sys/kernel
/sys/net /sys/vm etc? "kernel" to me shouldn't really imply the
stuff under the kernel/ source directory or other random stuff
that doesn't fit into another directory, but attributes that are
directly related to the kernel software (rather than directly
associated with any device).

> > I _assume_ these are system software stats and
> > tunables that are not exactly linked to device drivers (OTOH,
> > where do you draw the line? eg. Would filesystems go here?
>
> We already have /sys/fs/ ?
>
> > Core network algorithm tunables might, but per interface ones probably
> > not...).
>
> We will merge the nonsense of "block/", "class/" and "bus/" to one
> "subsystem". The block, class, bus directories will only be kept as
> symlinks for compatibility. Then every subsystem has a directory like:
> /sys/subsystem/block/, /sys/subsystem/net/ and the devices of the
> subsystem are in a devices/ directory below that. Just like the
> /sys/bus/<name>/devices/ layout looks today. All subsystem-global
> tunables can go below the /sys/subsystem/<name>/ directory, without
> clashing with the list of devices or anything else.

Makes sense.

> > I don't know. Is there guidelines for sysfs (and procfs for that
> > matter)? Is anyone maintaining it (not the infrastructure, but
> > the actual content)?
>
> Unfortunately, there was never really a guideline.
>
> > It's kind of ironic that /proc/sys/ looks like one of the best
> > organised directories in proc, while /sys/kernel seems to be in
> > danger of becoming a mess: it has kexec and uevent files in the
> > base directory, rather than in subdirectories...
>
> True, just looking at it now, people do crazy things like:
> /sys/kernel/notes, which is a file with binary content, and a name
> nobody will ever be able to guess what it is good for. That should
> definitely go into a section/ directory. Also the VM stuff there
> should probably move to a /sys/vm/ directory along with the weird
> placed top-level /sys/slab/.

Top level directory IMO should be kept as sparse as possible. If
you agree to /sys/mm for example, that's fine, but then slab should
go under that. (I'd prefer all to go underneath /sys/kernel, but...).

It would be nice to get a sysfs content maintainer or two. Just
having new additions occasionally reviewed along with the rest of
a patch, by random people, doesn't really aid consistency. Would it
be much trouble to ask that _all_ additions to sysfs be accompanied
by notification to this maintainer, along with a few line description?
(then merge would require SOB from said maintainer).

For that matter, this should be the case for *all* userspace API
changes (kernel-user-api@vger.kernel.org?)
http://lkml.org/lkml/2007/10/24/655
Opened 9 years ago
Closed 9 years ago
Last modified 9 years ago
#6565 closed (wontfix)
delete() method is not called on related objects
Description
Consider the following models.py:
from django.db import models

class A(models.Model):
    def delete(self):
        print 'deleting a'
        super(A, self).delete()

class B(models.Model):
    a = models.ForeignKey(A)

    def delete(self):
        print 'deleting B'
        super(B, self).delete()
Test case:
In [1]: from test import models

In [2]: a = models.A.objects.create()

In [3]: b = models.B.objects.create(a=a)

In [4]: models.A.objects.all()
Out[4]: [<A: A object>]

In [5]: models.B.objects.all()
Out[5]: [<B: B object>]

In [6]: a.delete()
deleting a

In [7]: models.A.objects.all()
Out[7]: []

In [8]: models.B.objects.all()
Out[8]: []
When calling "a.delete()", the delete method of model B should be called, so the expected output is:
In [6]: a.delete()
deleting b
deleting a
Change History (4)
comment:1 Changed 9 years ago by
comment:2 Changed 9 years ago by
I agree that not calling delete is more efficient. It looks like your suggestion to use signals is the best solution right now.
In my case, I need to call a function ALWAYS when an object is deleted. This requires me to write the following code for each affected model, which doesn't look that pretty:
class B(models.Model):
    a = models.ForeignKey(A)

    def pre_delete(self):
        print 'deleting b'

def delete_b(sender, instance, signal, *args, **kwargs):
    instance.pre_delete()

dispatcher.connect(delete_b, signal=signals.pre_delete, sender=B)
What about providing a simpler way to accomplish this?
comment:3 Changed 9 years ago by
Beauty's in the eye of the beholder, but there's really no problem with the above code. Particularly as your delete_b method only has to be written once. You register the signal handler and write a handler function that does whatever you want. It can't get much simpler than that, unless Django automatically registered the signal handler for you (which isn't going to happen -- signal dispatching is not free; if you want it, you set it up and everybody else doesn't pay the penalty). You have to write pre_delete in any case (whether it's called pre_delete or delete doesn't really matter here) and delete_b will work for all instances, regardless of class type. So, basically, you need to write one signal registration call per model; one line of code. Not a huge extra burden that I can see.
What sort of simplification are you hoping for? Taking into account the above (no automatic signal registration), if you have a suggestion, please, let's hear it.
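As a standalone sketch of why one registration line per model is all that's needed, here is a toy dispatcher. This is not Django's implementation — the class, names, and API are invented purely to show the shape of the design choice: only senders that register a handler pay for dispatch.

```python
# A toy signal dispatcher sketching the opt-in registration pattern
# described above. Only classes that connect a handler incur any
# dispatch cost; everyone else is unaffected.
class Signal:
    def __init__(self):
        self._receivers = []          # list of (handler, sender) pairs

    def connect(self, handler, sender):
        self._receivers.append((handler, sender))

    def send(self, sender, instance):
        for handler, wanted in self._receivers:
            if wanted is sender:      # dispatch only to registered senders
                handler(instance)

pre_delete = Signal()
deleted = []

class B:
    def delete(self):
        pre_delete.send(B, self)      # a framework would fire this before deleting

def delete_b(instance):
    deleted.append('deleting b')

pre_delete.connect(delete_b, sender=B)  # the one line per model
B().delete()
print(deleted)                        # ['deleting b']
```

The single `connect` call at the bottom is the per-model cost being discussed; the handler itself can be shared across any number of models.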
comment:4 Changed 9 years ago by
I was hoping to do something like this:
class B(models.Model):
    a = models.ForeignKey(A)

    @signal(signals.pre_delete)
    def pre_delete(self):
        print 'deleting b'
Unfortunately this doesn't work as the decorator doesn't know to which class the function belongs.
It's not clear that the delete() method on B *should* be called. The important thing is that the related objects are deleted, which works correctly. This isn't a subclassing relationship, so there's not a clear expectation that the corresponding method is called to actually do the deletion (it turns out to be less efficient than how we work now).
Needs some further thought, but I suspect this probably isn't a good idea. It introduces side-effects that could be difficult to control. If you want to do something when a related object is deleted, use the "delete" signal.
https://code.djangoproject.com/ticket/6565
Created on 2013-04-09 13:51 by bkabrda, last changed 2019-05-17 12:01 by vstinner. This issue is now closed.
When compiling Python 3.3.1, I noticed that some variables like LDFLAGS or CFLAGS in sysconfig have some flags multiple times. (Which BTW breaks distutils.tests.{test_sysconfig_compiler_vars,test_sysconfig_module}) This is caused by interpretation of Makefile in sysconfig._parse_makefile(), which seems to evaluate the variables in Makefile - but some variables in Makefile are formed by expanding env variables multiple times, e.g.:
PY_LDFLAGS= $(CONFIGURE_LDFLAGS) $(LDFLAGS)
CONFIGURE_LDFLAGS= @LDFLAGS@
so when doing the build from scratch with configure & make, PY_LDFLAGS gets the content of LDFLAGS twice (as far as I remember autotools...), CFLAGS gets expanded like 5 times at least.
I think that this is not the correct behaviour, but not sure, maybe I'm doing something wrong.
Thanks.
There definitely are configurations where some values do get duplicated in CFLAGS and LDFLAGS. In my experience this is generally harmless for builds but, as you point out, it can break tests that expect particular values. It would be nice to clean this up.
I'm attaching a patch that I'm currently using to solve this. It works, but it's a bit aggressive - in the sense that it only adds a string to the sysconfig variable iff this string is not a substring of current variable value. So it may corrupt some values, e.g. it wouldn't add "python" to variable if that variable already had "/usr/lib/python". But it seems to get all the values just fine for me.
Another solution may be to make the test more relaxed and regard the value returned by sysconfig.get_config_var() as a _set_ of shell tokens, whose elements may occur more than once, e.g.
def test_sysconfig_module(self):
import sysconfig as global_sysconfig
from shlex import split
self.assertEqual(
set(split(global_sysconfig.get_config_var('CFLAGS'))),
set(split(sysconfig.get_config_var('CFLAGS'))))
self.assertEqual(
set(split(global_sysconfig.get_config_var('LDFLAGS'))),
set(split(sysconfig.get_config_var('LDFLAGS'))))
The patch doesn't look good to me. If the value contains "-lfoo-lbar $(name)" then substituting name="-lfoo" or name="-lbar" doesn't work.
I don't think that the attached patch is correct. See attached install.diff: difference without/with 00178-dont-duplicate-flags-in-sysconfig.patch on Python installed in /usr/bin/python3.
Example of bug:
'TESTRUNNER': 'LD_LIBRARY_PATH=/builddir/build/BUILD/Python-3.7.1/build/optimized '
- './python '
- '/builddir/build/BUILD/Python-3.7.1/Tools/scripts/run_tests.py',
+ './python /Tools/scripts/run_tests.py',
The /Tools directory doesn't exist :-/
> I think that this is not the correct behaviour, but not sure, maybe I'm doing something wrong.
Technically, it's perfectly fine to pass the same flag multiple times. It's common to pass -O0 -Og to gcc for example: gcc uses the last -O option value (which overrides the previous ones).
--
This patch is used in the python3 package of Fedora:
Patch added by:
commit 58f477b403222ea6c13d5d7358551b606cddc0f8
Author: Bohuslav Kabrda <bkabrda@redhat.com>
Date: Wed Apr 10 14:30:09 2013 +0200
build.diff: Difference without/with the patch on Python build from source.
Example:
- CPPFLAGS = "-I. -I./Include"
+ CPPFLAGS = "-I. -I/Include"
This change is wrong: /Include directory doesn't exist.
Another example:
- LIBPL = "/usr/local/lib/python3.8/config-3.8dm-x86_64-linux-gnu"
+ LIBPL = "/usr/local/lib/python3.8/config-dm-x86_64-linux-gnu"
I don't understand why "3.8" is removed from the path.
The patch is wrong. I'm not sure when/how C flags are duplicated. Anyway, it seems like the issue is somehow outdated or even gone, so I close the issue.
https://bugs.python.org/issue17679
-- | Constructing and checking whether rewrite rules are valid
module DDC.Core.Transform.Rewrite.Rule
        ( -- * Binding modes
          BindMode      (..)
        , isBMSpec
        , isBMValue
        , RewriteRule   (..)
        , NamedRewriteRule

          -- * Construction
        , mkRewriteRule
        , checkRewriteRule
        , Error         (..)
        , Side          (..))
where
import DDC.Core.Transform.Rewrite.Error
import DDC.Core.Transform.Reannotate
import DDC.Core.Transform.TransformX
import DDC.Core.Exp
import DDC.Core.Pretty          ()
import DDC.Core.Collect
import DDC.Core.Compounds
import DDC.Type.Pretty          ()
import DDC.Type.Env             (KindEnv, TypeEnv)
import DDC.Base.Pretty
import Control.Monad
import qualified DDC.Core.Analysis.Usage        as U
import qualified DDC.Core.Check                 as C
import qualified DDC.Core.Collect               as C
import qualified DDC.Core.Transform.SpreadX     as S
import qualified DDC.Type.Check                 as T
import qualified DDC.Type.Compounds             as T
import qualified DDC.Type.Env                   as T
import qualified DDC.Type.Equiv                 as T
import qualified DDC.Type.Predicates            as T
import qualified DDC.Type.Subsumes              as T
import qualified DDC.Type.Transform.SpreadT     as S
import qualified Data.Map                       as Map
import qualified Data.Maybe                     as Maybe
import qualified Data.Set                       as Set
import qualified DDC.Type.Env                   as Env


-- | A rewrite rule. For example:
--
--   @ RULE [r1 r2 r3 : %] (x : Int r1)
--      . addInt [:r1 r2 r3:] x (0 [r2] ())
--      = copyInt [:r1 r3:] x
--   @
data RewriteRule a n
        = RewriteRule
        { -- | Variables bound by the rule.
          ruleBinds       :: [(BindMode, Bind n)]

          -- | Extra constraints on the rule.
          --   These must all be satisfied for the rule to fire.
        , ruleConstraints :: [Type n]

          -- | Left-hand side of the rule.
          --   We match on this part.
        , ruleLeft        :: Exp a n

          -- | Extra part of left-hand side,
          --   but allow this bit to be out-of-context.
        , ruleLeftHole    :: Maybe (Exp a n)

          -- | Right-hand side of the rule.
          --   We replace the matched expression with this part.
        , ruleRight       :: Exp a n

          -- | Effects that are caused by the left but not the right.
          --   When applying the rule we add an effect weakening to ensure
          --   the rewritten expression has the same effects.
        , ruleWeakEff     :: Maybe (Effect n)

          -- | Closure that the left has that is not present in the right.
          --   When applying the rule we add a closure weakening to ensure
          --   the rewritten expression has the same closure.
        , ruleWeakClo     :: [Exp a n]

          -- | References to environment.
          --   Used to check whether the rule is shadowed.
        , ruleFreeVars    :: [Bound n]
        } deriving (Eq, Show)


type NamedRewriteRule a n
        = (String, RewriteRule a n)


instance (Pretty n, Eq n) => Pretty (RewriteRule a n) where
 ppr (RewriteRule bs cs lhs hole rhs _ _ _)
  = pprBinders bs <> pprConstrs cs <> ppr lhs <> pprHole <> text " = " <> ppr rhs
  where pprBinders []  = text ""
        pprBinders bs' = foldl1 (<>) (map pprBinder bs') <> text ". "

        pprBinder (BMSpec,    b) = text "[" <> ppr b <> text "] "
        pprBinder (BMValue _, b) = text "(" <> ppr b <> text ") "

        pprConstrs []      = text ""
        pprConstrs (c:cs') = ppr c <> text " => " <> pprConstrs cs'

        pprHole
         | Just h <- hole = text " {" <> ppr h <> text "}"
         | otherwise      = text ""


-- BindMode -------------------------------------------------------------------
-- | Binding level for the binders in a rewrite rule.
data BindMode
        -- | Level-1 binder (specs)
        = BMSpec
        -- | Level-0 binder (data values and witnesses)
        | BMValue Int   -- ^ number of usages
        deriving (Eq, Show)


-- | Check if a `BindMode` is a `BMSpec`.
isBMSpec :: BindMode -> Bool
isBMSpec BMSpec = True
isBMSpec _      = False


-- | Check if a `BindMode` is a `BMValue`.
isBMValue :: BindMode -> Bool
isBMValue (BMValue _) = True
isBMValue _           = False


-- Make -----------------------------------------------------------------------
-- | Construct a rewrite rule, but do not check if it's valid.
--
--   You then need to apply 'checkRewriteRule' to check it.
mkRewriteRule
        :: Ord n
        => [(BindMode, Bind n)] -- ^ Variables bound by the rule.
        -> [Type n]             -- ^ Extra constraints on the rule.
        -> Exp a n              -- ^ Left-hand side of the rule.
        -> Maybe (Exp a n)      -- ^ Extra part of left, can be out of context.
        -> Exp a n              -- ^ Right-hand side (replacement).
        -> RewriteRule a n

mkRewriteRule bs cs lhs hole rhs
 = RewriteRule bs cs lhs hole rhs Nothing [] []


-- Check ----------------------------------------------------------------------
-- | Check a rewrite rule.
checkRewriteRule
        :: (Ord n, Show n, Pretty n)
        => C.Config n           -- ^ Type checker config.
        -> T.Env n              -- ^ Kind environment.
        -> T.Env n              -- ^ Type environment.
        -> RewriteRule a n      -- ^ Rule to check.
        -> Either (Error a n) (RewriteRule (C.AnTEC a n) n)

checkRewriteRule config kenv tenv
        (RewriteRule bs cs lhs hole rhs _ _ _)
 = do   -- Extend the environments with variables bound by the rule.
        let (kenv', tenv', bs') = extendBinds bs kenv tenv
        let csSpread            = map (S.spreadT kenv') cs

        -- Check that all constraints are valid types.
        mapM_ (checkConstraint config kenv') csSpread

        -- Typecheck, spread and annotate with type information.
        (lhs', _, _, _) <- checkExp config kenv' tenv' Lhs lhs

        -- If the extra left part is there, typecheck and annotate it.
        hole' <- case hole of
                  Just h  -> do
                        (h', _, _, _) <- checkExp config kenv' tenv' Lhs h
                        return $ Just h'
                  Nothing -> return Nothing

        -- Build application from lhs and the hole so we can check its
        -- type against rhs.
        let Just a   = takeAnnotOfExp lhs
        let lhs_full = maybe lhs (XApp a lhs) hole

        -- Check the full left hand side.
        (lhs_full', tLeft, effLeft, cloLeft)
                <- checkExp config kenv' tenv' Lhs lhs_full

        -- Check the full right hand side.
        (rhs', tRight, effRight, cloRight)
                <- checkExp config kenv' tenv' Rhs rhs

        -- Check that types of both sides are equivalent.
        let err = ErrorTypeConflict
                        (tLeft,  effLeft,  cloLeft)
                        (tRight, effRight, cloRight)
        checkEquiv tLeft tRight err

        -- Check the effect of the right is smaller than that
        -- of the left, and add a weakeff cast if necessary.
        effWeak <- makeEffectWeakening T.kEffect effLeft effRight err

        -- Check that the closure of the right is smaller than that
        -- of the left, and add a weakclo cast if necessary.
        cloWeak <- makeClosureWeakening config kenv' tenv' lhs_full' rhs'

        -- Check that all the bound variables are mentioned
        -- in the left-hand side.
        checkUnmentionedBinders bs' lhs_full'

        -- No BAnons allowed.
        -- We don't handle deBruijn binders.
        checkAnonymousBinders bs'

        -- No lets or lambdas in left-hand side.
        -- We can't match against these.
        checkValidPattern lhs_full

        -- Count how many times each binder is used in the right-hand side.
        bs'' <- countBinderUsage bs' rhs

        -- Get the free variables of the rule.
        let binds    = Set.fromList
                     $ Maybe.catMaybes
                     $ map (T.takeSubstBoundOfBind . snd) bs

        let freeVars = Set.toList
                     $ (C.freeX T.empty lhs_full'
                        `Set.union` C.freeX T.empty rhs)
                       `Set.difference` binds

        return  $ RewriteRule
                        bs'' csSpread
                        lhs' hole' rhs'
                        effWeak cloWeak
                        freeVars


-- | Extend kind and type environments with a rule's binders.
--   Which environment a binder goes into depends on its BindMode.
--   Also return the list of binders which have been spread.
extendBinds
        :: Ord n
        => [(BindMode, Bind n)]
        -> KindEnv n -> TypeEnv n
        -> (T.KindEnv n, T.TypeEnv n, [(BindMode, Bind n)])

extendBinds binds kenv tenv
 = go binds kenv tenv []
 where  go [] k t acc
         = (k, t, acc)

        go ((bm, b) : bs) k t acc
         = let  b'       = S.spreadX k t b
                (k', t') = case bm of
                            BMSpec    -> (T.extend b' k, t)
                            BMValue _ -> (k, T.extend b' t)
           in   go bs k' t' (acc ++ [(bm, b')])


-- | Type check the expression on one side of the rule.
checkExp
        :: (Ord n, Show n, Pretty n)
        => C.Config n
        -> KindEnv n    -- ^ Kind environment of expression.
        -> TypeEnv n    -- ^ Type environment of expression.
        -> Side         -- ^ Side that the expression appears on, for errors.
        -> Exp a n      -- ^ Expression to check.
        -> Either (Error a n)
                  (Exp (C.AnTEC a n) n, Type n, Effect n, Closure n)

checkExp defs kenv tenv side xx
 = let  xx' = S.spreadX kenv tenv xx
   in   case C.checkExp defs kenv tenv xx' of
         Left err  -> Left $ ErrorTypeCheck side xx' err
         Right rhs -> return rhs


-- | Type check a constraint on the rule.
checkConstraint
        :: (Ord n, Show n, Pretty n)
        => C.Config n
        -> KindEnv n    -- ^ Kind environment of the constraint.
        -> Type n       -- ^ The constraint type to check.
        -> Either (Error a n) (Kind n)

checkConstraint defs kenv tt
 = case T.checkType (C.configPrimDataDefs defs) kenv tt of
        Left _err -> Left $ ErrorBadConstraint tt
        Right k
         | T.isWitnessType tt -> return k
         | otherwise          -> Left $ ErrorBadConstraint tt


-- | Check equivalence of types, or error.
checkEquiv
        :: Ord n
        => Type n       -- ^ Type of left of rule.
        -> Type n       -- ^ Type of right of rule.
        -> Error a n    -- ^ Error to report if the types don't match.
        -> Either (Error a n) ()

checkEquiv tLeft tRight err
 | T.equivT tLeft tRight = return ()
 | otherwise             = Left err


-- Weaken ---------------------------------------------------------------------
-- | Make the effect weakening for a rule.
--   This contains the effects that are caused by the left of the rule
--   but not the right.
--   If the right has more effects than the left then return an error.
makeEffectWeakening
        :: (Ord n, Show n)
        => Kind n       -- ^ Should be the effect kind.
        -> Effect n     -- ^ Effect of the left of the rule.
        -> Effect n     -- ^ Effect of the right of the rule.
        -> Error a n    -- ^ Error to report if the right is bigger.
        -> Either (Error a n) (Maybe (Type n))

makeEffectWeakening k effLeft effRight onError
 -- When the effect of the left matches that of the right
 -- then we don't have to do anything else.
 | T.equivT effLeft effRight
 = return Nothing

 -- When the effect of the right is smaller than that of
 -- the left then we need to wrap it in an effect weakening
 -- so the rewritten expression retains its original effect.
 | T.subsumesT k effLeft effRight
 = return $ Just effLeft

 -- When the effect of the right is more than that of the left
 -- then this is an error. The rewritten expression can't have
 -- more effects than the source.
 | otherwise
 = Left onError


-- | Make the closure weakening for a rule.
--   This contains a closure term for all variables that are present
--   in the left of a rule but not in the right.
makeClosureWeakening
        :: (Ord n, Pretty n, Show n)
        => C.Config n           -- ^ Type-checker config.
        -> T.Env n              -- ^ Kind environment.
        -> T.Env n              -- ^ Type environment.
        -> Exp (C.AnTEC a n) n  -- ^ Expression on the left of the rule.
        -> Exp (C.AnTEC a n) n  -- ^ Expression on the right of the rule.
        -> Either (Error a n) [Exp (C.AnTEC a n) n]

makeClosureWeakening config kenv tenv lhs rhs
 = let  lhs'         = removeEffects config kenv tenv lhs
        supportLeft  = support Env.empty Env.empty lhs'
        daLeft       = supportDaVar supportLeft
        wiLeft       = supportWiVar supportLeft
        spLeft       = supportSpVar supportLeft

        rhs'         = removeEffects config kenv tenv rhs
        supportRight = support Env.empty Env.empty rhs'
        daRight      = supportDaVar supportRight
        wiRight      = supportWiVar supportRight
        spRight      = supportSpVar supportRight

        Just a       = takeAnnotOfExp lhs

   in   Right
         $  [ XVar a u
            | u <- Set.toList $ daLeft `Set.difference` daRight ]

         ++ [ XWitness (WVar u)
            | u <- Set.toList $ wiLeft `Set.difference` wiRight ]

         ++ [ XType (TVar u)
            | u <- Set.toList $ spLeft `Set.difference` spRight ]


-- | Replace all effects with !0.
--   This is done so that when @makeClosureWeakening@ finds free variables,
--   it ignores those only mentioned in effects.
removeEffects
        :: (Ord n, Pretty n, Show n)
        => C.Config n   -- ^ Type-checker config.
        -> T.Env n      -- ^ Kind environment.
        -> T.Env n      -- ^ Type environment.
        -> Exp a n      -- ^ Target expression; has all effects replaced with bottom.
        -> Exp a n

removeEffects config
 = transformUpX remove
 where  remove kenv _tenv x
         | XType et <- x
         , Right k  <- T.checkType (C.configPrimDataDefs config) kenv et
         , T.isEffectKind k
         = XType $ T.tBot T.kEffect

         | otherwise
         = x


-- Structural Checks ----------------------------------------------------------
-- | Check for rule variables that have no uses.
checkUnmentionedBinders
        :: (Ord n, Show n)
        => [(BindMode, Bind n)]
        -> Exp (C.AnTEC a n) n
        -> Either (Error a n) ()

checkUnmentionedBinders bs expr
 = let  used  = C.freeX T.empty expr
                `Set.union` C.freeT T.empty expr

        binds = Set.fromList
              $ Maybe.catMaybes
              $ map (T.takeSubstBoundOfBind . snd) bs

   in   if binds `Set.isSubsetOf` used
         then return ()
         else Left ErrorVarUnmentioned


-- | Check for anonymous binders in the rule. We don't handle these.
checkAnonymousBinders :: [(BindMode, Bind n)] -> Either (Error a n) ()
checkAnonymousBinders bs
 | (b:_) <- filter T.isBAnon $ map snd bs
 = Left $ ErrorAnonymousBinder b

 | otherwise
 = return ()


-- | Check whether the form of the left-hand side of the rule is valid.
--   We can only match against nested applications, and not general
--   expressions containing let-bindings and the like.
checkValidPattern :: Exp a n -> Either (Error a n) ()
checkValidPattern expr
 = go expr
 where  go (XVar _ _)        = return ()
        go (XCon _ _)        = return ()
        go x@(XLAM _ _ _)    = Left $ ErrorNotFirstOrder x
        go x@(XLam _ _ _)    = Left $ ErrorNotFirstOrder x
        go (XApp _ l r)      = go l >> go r
        go x@(XLet _ _ _)    = Left $ ErrorNotFirstOrder x
        go x@(XCase _ _ _)   = Left $ ErrorNotFirstOrder x
        go (XCast _ _ x)     = go x
        go (XType t)         = go_t t
        go (XWitness _)      = return ()

        go_t (TVar _)        = return ()
        go_t (TCon _)        = return ()
        go_t t@(TForall _ _) = Left $ ErrorNotFirstOrder (XType t)
        go_t (TApp l r)      = go_t l >> go_t r
        go_t (TSum _)        = return ()


-- | Count how many times each binder is used in the right-hand side.
countBinderUsage
        :: Ord n
        => [(BindMode, Bind n)]
        -> Exp a n
        -> Either (Error a n) [(BindMode, Bind n)]

countBinderUsage bs x
 = let  Just (U.UsedMap um)
                = liftM fst $ takeAnnotOfExp $ U.usageX x

        get (BMValue _, BName n t)
         = ( BMValue $ length $ Maybe.fromMaybe [] $ Map.lookup n um
           , BName n t)
        get b = b

   in   return $ map get bs


-- | Allow the expressions, and anything else with annotations,
--   to be reannotated.
instance Reannotate RewriteRule where
 reannotate f (RewriteRule bs cs lhs hole rhs eff clo fv)
  = RewriteRule bs cs
        (re lhs) (fmap re hole) (re rhs)
        eff (map re clo) fv
    where re = reannotate f
http://hackage.haskell.org/package/ddc-core-simpl-0.3.1.1/docs/src/DDC-Core-Transform-Rewrite-Rule.html
Nix writes:

> On 7 Feb 2012, Stefan Monnier said:
>
>>> I've never understood what's wrong with including cl.el, nor why the
>>
>> The main issue is namespace. If someone goes through the code to rename
>> it all to "cl-*", then we won't need to avoid using it.
>
> Aha. I'd agree with *that*: it's always been hellishly unclear which
> things are in cl or not.

Oh, s**t. *We* do *not* agree, since we're moving in the direction of
exact Common Lisp conformance for the subset of features we provide (I
think Aidan has the intention of reducing cl.el to

;;; cl --- Common Lisp emulation for Emacsen

(provide 'cl)

;;; end of cl.el

in the near future). Not that anybody on this list *should* care about
that, but just in case.... ;-)
http://lists.gnu.org/archive/html/emacs-devel/2012-02/msg00260.html
This related question seems to support that conclusion, but I'm still not sure about what other rules there are around naming Active Directory domains.
Are there any best practices on what an Active Directory name should or shouldn't be?
This has been a fun topic of discussion on Server Fault. There appear to be varying "religious views" on the topic.
I agree with Microsoft's recommendation: Use a sub-domain of the company's already-registered Internet domain name.
So, if you own foo.com, use ad.foo.com or some such.
The most vile thing, as I see it, is using the registered Internet domain name, verbatim, for the Active Directory domain name. This forces you to manually copy records from the Internet DNS (like www) into the Active Directory DNS zone so that "external" names resolve. I've seen utterly silly things like IIS installed on every DC in an organization, running a web site that does a redirect, such that someone entering foo.com into their browser would be redirected to the www site by these IIS installations. Utter silliness!
Using the Internet domain name gains you no advantages, but creates "make work" every time you change the IP addresses that external host names refer to. (Try using geographically load-balanced DNS for the external hosts and integrating that with such a "split DNS" situation, too! Gee-- that would be fun...)
Using such a subdomain has no effect on things like Exchange email delivery or User Principal Name (UPN) suffixes, BTW. (I often see those both cited as excuses for using the Internet domain name as the AD domain name.)
I also see the excuse "lots of big companies do it". Large companies can make boneheaded decisions as easily (if not moreso) than small companies. I don't buy that just because a large company makes a bad decision that somehow causes it to be a good decision.
To assist MDMarra's answer:
You should NEVER use a single-label DNS name for your domain name either. This was/is available prior to Windows 2008 R2. Reasons/explanations can be found here:
Don't forget to NOT use reserved words (a table is included in the "Naming Conventions" link at the bottom of this post), such as SYSTEM or WORLD or RESTRICTED.
I also agree with Microsoft in that you should follow two additional rules (that aren't set in stone, but still):
Finally, I would recommend that you think long term as much as possible. Companies do go through mergers and acquisitions, even small companies. Also think in terms of getting outside help/consultation. Use domain names, AD structure, etc. that will be explainable to consultants or people here on SF without much effort.
Knowledge links:
Microsoft's current (W2k12) recommendation page for the root forest domain name
There are only two correct answers to this question.
An unused sub-domain of a domain that you use publicly. For example, if your public web presence is example.com your internal AD might be named something like ad.example.com or internal.example.com.
An unused second-level domain that you own and don't use anywhere else. For example, if your public web presence is example.com your AD might be named example.net as long as you have registered example.net and don't use it anywhere else!
These are your only two choices. If you do something else, you're leaving yourself open to a lot of pain and suffering.
But everyone uses .local!
Doesn't matter. You shouldn't. I've blogged about the use of .local and other made up TLDs like .lan and .corp. Under no circumstances should you ever do this.
It's not more secure. It's not "best practices" like some people claim. And it doesn't have any benefit over the two choices that I've proposed.
But I want to name it the same as my public website's URL so that my users are example\user instead of ad\user
This is a valid, but misguided concern. When you promote the first DC in a domain, you can set the NetBIOS name of the domain to whatever you want it to be. If you follow my advice and set up your domain to be ad.example.com, you can configure the domain's NetBIOS name to be example so that your users will log on as example\user.
In Active Directory Forests and Trusts, you can create additional UPN suffixes as well. There's nothing stopping you from creating and setting @example.com as the primary UPN suffix for all accounts in your domain. When you combine this with the previous NetBIOS recommendation, no end user will ever see that your domain's FQDN is ad.example.com. Everything that they see will be example\ or @example.com. The only people that will need to work with the FQDN are the systems admins that work with Active Directory.
Also, assume that you use a split-horizon DNS namespace, meaning that your AD name is the same as your public-facing website. Now, your users can't get to example.com internally unless you have them prefix www. in their browser or you run IIS on all of your domain controllers (this is bad). You also have to curate two non-identical DNS zones that share a disjoint namespace. It's really more hassle than it's worth. Now imagine that you have a partnership with another company and they also have a split-horizon DNS configuration with their AD and their external presence. You have a private fiber link between the two and you need to create a trust. Now, all of your traffic to any of their public sites has to traverse the private link instead of just going out over the Internet. It also creates all kinds of headaches for the network admins on both sides. Avoid this. Trust me.
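To make the split-horizon "make work" concrete, here is a toy illustration in Python. The addresses and records are invented, and real split-horizon setups live in your DNS servers, not in code — this only shows why the internal and external views of the same name diverge:

```python
# Toy split-horizon namespace: the internal (AD) zone and the external
# zone disagree about the same domain. The AD zone has no 'www' record
# until an admin manually copies it in.
internal_view = {
    "example.com":     "10.0.0.5",      # resolves to a DC internally
    "dc1.example.com": "10.0.0.5",
}
external_view = {
    "example.com":     "203.0.113.7",
    "www.example.com": "203.0.113.7",
}

def resolve(name, view):
    """Look a name up in one DNS view; None means NXDOMAIN."""
    return view.get(name)

# Inside the network the public site's name simply fails to resolve:
print(resolve("www.example.com", internal_view))  # None
print(resolve("www.example.com", external_view))  # 203.0.113.7
```

Every public record you add or change must now be mirrored by hand into the internal zone, which is the ongoing curation cost described above.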
But but but...
Seriously, there's no reason not to use one of the two things that I've suggested. Any other way has pitfalls. I'm not telling you to rush to change your domain name if it's functioning and in place, but if you're creating a new AD, do one of the two things that I've recommended above.
I always do mydomain.local.
local is not a valid TLD, so it never competes with an actual public DNS entry.
For example, I like being able to know that web1.mydomain.local will resolve to the internal IP of a web server, while web1.mydomain.com will resolve to the external IP.
http://serverfault.com/questions/76715/windows-active-directory-naming-best-practices?answertab=active
Details
- Type:
Improvement
- Status: Resolved
- Priority:
Minor
- Resolution: Fixed
- Affects Version/s: 4.0.2
- Component/s: Annotations
- Labels:None
Description
Why do I have to type @Component(type = "TextField") when the return type of the abstract method below it is TextField? Can't this be inferred?
Activity
I think it will not be appropriate because of how component libraries work: two components in two separate libraries can easily share the same Java class.
It may work in some cases (for example in case of framework components) but it would introduce the special handling of those components...
The type parameter of the Component annotation is in fact necessary in some cases,
i.e. when the component name is different from the component classname (@If and class IfBean)
or when the component is part of a library (and thus needs a prefix).
However, I too believe that if type is left blank the framework should try to fill in the details...
I don't see any special handling...
If type is left blank, the framework fills it with method.returnType.simpleName
An exception would have been thrown anyway...
If the type parameter is missing then the framework can iterate through all components of the given namespace.
- if there is exactly 1 component with the given Java class then it should be used
- otherwise an exc. should be thrown
I'm not sure if this feature worth the effort...
I think that handling the name of the method's return type as component type may be misleading...
I don't think there's a list of the components of the framework or the included libraries
anywhere at the time this enhancement worker processes the annotation.
So, complete component guessing is out of the question anyway...
We're simply talking about having a nice default here - not a complete
discovery service...
I agree. We don't need to be that smart about this. The default if no type annotation is specified is the return type of the method. If there is a special case where this is not correct, then the type annotation will need to be specified in order to handle it. It is no worse for the special cases than the current behavior and is better for the typical case.
Sounds reasonable to me.
https://issues.apache.org/jira/browse/TAPESTRY-994
FeedManager is a custom RSS/Atom Syndication generator that allows for quick and efficient generation of RSS/Atom feeds. Just reference the dll in your web applications, then write a few lines of code to expose your data to external applications/web sites.
The rsstoolkit exists for generating syndicated feeds on the fly, but if, like me, you have had issues porting the toolkit between project types and have to keep messing with web.config settings to get it to work, then the feedmanager.dll approach might work for you.
Here's how to get started:
protected void Page_Load(object sender, EventArgs e)
{
    using (FeedManager.FeedCreator aFeed = new FeedManager.FeedCreator(FeedType.RSS))
    {
        aFeed.FeedUrl = "";
        aFeed.FeedTitle = "My RSS Feed Title";
        aFeed.FeedDescription = "My RSS Feed Description";
        aFeed.FeedCopyright = "(c) 2009. All rights reserved";
        aFeed.FeedImage = "";

        // Add items to the feed.
        aFeed.AddItem(new FeedItem(Guid.NewGuid().ToString(), "first Item", "first summary",
            "Lorem ipsum dolor sit amet.", "thefirstlink", "1.aspx",
            DateTime.Now.ToString(), "Jaycent", "info@solid2.com"));
        aFeed.AddItem(new FeedItem(Guid.NewGuid().ToString(), "second Item", "second summary",
            "Lorem ipsum dolor sit amet.", "thesecondlink", "",
            DateTime.Now.ToString(), "Jaycent", "info@solid2.com"));
        aFeed.AddItem(new FeedItem(Guid.NewGuid().ToString(), "third Item", "third summary",
            "Lorem ipsum dolor sit amet.", "thethirdlink", "",
            DateTime.Now.ToString(), "Jaycent", "info@solid2.com"));
        aFeed.AddItem(new FeedItem(Guid.NewGuid().ToString(), "fourth Item", "fourth summary",
            "Lorem ipsum dolor sit amet.", "thefourthlink", "",
            DateTime.Now.ToString(), "Jaycent", "info@solid2.com"));
        aFeed.AddItem(new FeedItem(Guid.NewGuid().ToString(), "fifth Item", "fifth summary",
            "Lorem ipsum dolor sit amet.", "thefifthlink", "",
            DateTime.Now.ToString(), "Jaycent", "info@solid2.com"));
        aFeed.AddItem(new FeedItem(Guid.NewGuid().ToString(), "sixth Item", "sixth summary",
            "Lorem ipsum dolor sit amet.", "thesixthlink", "",
            DateTime.Now.ToString(), "Jaycent", "info@solid2.com"));

        // Generate the feed and send the result to the browser window.
        aFeed.GenerateFeed();
    }
}
Navigate to the page to view the output!
Play around with the FeedType setting in the constructor, switching between FeedType.RSS and FeedType.Atom. View the source of the generated file to examine the difference in the output. For an excellent comparison of Atom vs RSS, see Aaron Brazell's article here.
Happy new year, and happy programming!
Developers often find themselves having to connect to data in a variety of datasources, ranging from MS Access to large-scale relational databases such as Oracle, SQL Server, MySQL, etc. Each datasource type typically requires importing a different .NET provider-specific namespace for working with that specific database. For instance, to connect to an Oracle database the consuming application needs to import the System.Data.OracleClient namespace and utilize classes such as OracleCommand, OracleDataReader, and so on. To connect to SQL Server, developers import the System.Data.SqlClient namespace into their applications and program against the classes provided by that namespace. The net effect of having multiple provider-specific namespaces is that your DAL components are rarely ever portable, with each different database type normally requiring provider-specific code.

System.Data.GenericClient is a simple but generic custom-built data access component that solves some of the issues outlined above. It provides a very simple but familiar API that can be configured declaratively in an application configuration file as well as programmatically in your DAL. Below I'll take a look at using System.Data.GenericClient to connect to various datasources using a consistent API.
Here's what you need to do to get started using System.Data.GenericClient in your applications:
The code snippet above shows how simple it is to use System.Data.GenericClient to access data from MySQL in a provider-independent format. For each additional datasource your application needs to connect to, specify a valid connection string in the application's configuration file as outlined above, then specify a valid command text or stored procedure on GenericClient's command object. See below for a list of operations supported by the GenericClient object:
public
System.Data.GenericClient remains a work in progress. In addition to the features highlighted above, there are other features in the namespace; these will be further refined and documented in future updates.
File Download: System.Data.GenericClient.zip Want a copy of the source code? Send email to jaycentdrysdale@hotmail.com
Recently I was challenged by a colleague who asked some very basic questions relating to object-oriented programming. I know these concepts very well, but sometimes you have to stop and think twice about them, especially if, more often than not, you do not implement them in your day-to-day programming activities. As developers, we often find ourselves in situations where the need to get things out the door and into production encourages us not to think about doing things the right way, but instead to get the work done in the least possible time. Object-oriented concepts are something every serious developer should know inside out. Sure, you can get by without practicing them, but that approach might come back to haunt you down the road.

Classes are the foundation of object-oriented programming. Business logic code should be separated into groups of related classes that satisfy a business need. There should be a clear separation of concerns. When planning a development strategy for our applications, we should at all times try to envision our applications as being separated into three logical tiers. Strictly speaking, having three logical tiers doesn't dictate that our apps must be deployed physically as a three-tier app...all the tiers could live on one physical machine, or they could be deployed across three different machines. They could also be deployed on multiple machines in the enterprise, in what is known as an n-tier architecture. Our apps will perform much better when we distribute the processing requirements across multiple machines: business logic components run on machine A, data access components run on a separate machine, and the front-end logic runs on a client PC or web server somewhere else.
Microsoft Transaction Server, and now Enterprise Services, can act as a broker for our components living in the data access and business logic tiers, offering services such as transaction support and object pooling, which promote better-scaling applications.

So, what is an abstract class? What is a sealed class? What are the different accessibility modifiers out there that I can apply to my classes and methods, and why do I need them? What are interfaces, and when is it useful to use one? These are the questions that rolled through my head when I was first introduced to OOP years ago. Over time, as you develop more business-level applications, the importance of these concepts becomes clearer.

Inheritance is a popular term among the OOP purists. So what is inheritance, and how does it relate to OOP? Perhaps the best way to describe this would be to imagine a hypothetical situation where you are building an app that requires you to maintain a list of employee records. All employees must have an employeeID, a first name, a departmentID, etc. In addition, some employees work on a part-time basis, and some work full-time.

Looking at the above scenario, we have identified the need for an employee class to service our application. But before we start building our employee class, we need to take a closer look at how that class will be implemented. There are common features that all employees share, but there are also some features that are specific to the full-time employee and others that are specific to the part-time employee. For instance, when we calculate the monthly pay for the full-time employee, the logic might be different from what we do when we calculate the salary of the guy that is employed part-time. It might help if we define a baseline Employee class that contains the common functionality, properties, etc., and then further extend that class in derived classes that implement the specific functionality.
So at the root we might create an Employee class and two derived classes: FullTimeEmployee and PartTimeEmployee. Both of these classes will be based on the base Employee class and will inherit any functionality it exposes, but each will provide its own method for calculating an employee's salary. By marking our Employee class as abstract, we are marking it as a class that cannot be instantiated directly...we are saying that this class exists only to be further refined through inheritance. The exact opposite would be to mark our class as sealed, which would mean it cannot be further refined through inheritance.
The definition for our base employee class might look like this:

public abstract class Employee {}
The definition for the derived classes might look like so:

public sealed class PartTimeEmployee : Employee {}
public sealed class FullTimeEmployee : Employee {}

Selected data and function members defined in the Employee class will be available in both derived classes. Data members such as firstname, lastname, deptID, and employeeID that are common to all employees are defined on the Employee base class. The Employee base class will also contain a method called CalculateWages, which will contain no functionality, but will tell any class that derives from Employee that it must implement a CalculateWages method. In the base class, the signature for the CalculateWages method might look like this:

public abstract decimal CalculateWages();
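Putting the pieces above together, a fleshed-out version of the hierarchy might look like the sketch below. The salary properties and pay formulas are invented for illustration; only the class names, modifiers, and CalculateWages signature come from the discussion above.

```csharp
// Hypothetical sketch of the Employee hierarchy described above.
public abstract class Employee
{
    public int EmployeeID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public int DeptID { get; set; }

    // No body here: every derived class must supply its own implementation.
    public abstract decimal CalculateWages();
}

public sealed class FullTimeEmployee : Employee
{
    public decimal AnnualSalary { get; set; }

    public override decimal CalculateWages()
    {
        return AnnualSalary / 12m;  // monthly pay for salaried staff
    }
}

public sealed class PartTimeEmployee : Employee
{
    public decimal HourlyRate { get; set; }
    public int HoursWorked { get; set; }

    public override decimal CalculateWages()
    {
        return HourlyRate * HoursWorked;  // pay driven by hours logged
    }
}
```

Because CalculateWages is abstract, the compiler forces both derived classes to override it, and code that works with an Employee reference never needs to know which concrete type it holds.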
The scenario looked at above describes a very simple example of how we might implement an employee class in one of our applications. Of course, there is much more to object-oriented programming, too much to be described in a single blog post, and there are many different ways to implement OOP...for instance, the Employee base class described above could be implemented as an interface instead of as an abstract class.
In Part 2 of this four-part series, I'll take a look at other class modifiers, as well as some of the accessibility options we have available for exposing our classes to the outside world and what they mean for our classes. I'll also take a look at interfaces and how they can be used in the employee list management scenario described in this article.
The next release of Microsoft SQL Server, version 2008, promises a number of significant changes over the previous 2005 release. From a developer's perspective, there is much anticipation over the new data types that we will get to play with in our stored procedures and other database objects. I'll go through some of the new data types in this post.
New date data types: The new Date data type allows you to store dates without a time component, from 0001-01-01 to 9999-12-31. This new type will lend itself well to situations where a date variable doesn't need a time attached to it, like a date-of-birth field/variable. There is also the Time type, which, as you might guess, stores a time value minus the date portion. Already I'm seeing situations where these data types would have saved me a few lines of code had they been available earlier. Other new date types include DateTime2 and DateTimeOffset.
New HierarchyId data type: According to the Microsoft documentation, hierarchyid is a new data type that can store values representing nodes in a hierarchy tree. This data type, which has a flexible programming model, is implemented as a Common Language Runtime user-defined type (CLR UDT) and exposes several efficient built-in methods for creating and operating on hierarchy nodes. I am eagerly awaiting further details on the use of this new type, but at first glance it seems we will now be able to store the hierarchical metadata that describes things like menu trees in a format that is easily retrievable. Building menus could be as easy as binding a treeview control to a field returned from a database or web-method call. I hope my understanding of this new type is not too far off from what it actually is. Time will tell.
User-defined table type: Again, the Microsoft documentation describes this new type as follows: a user-defined table type represents the definition of a table structure. You can use a user-defined table type to declare table-valued parameters for stored procedures or functions, or to declare table variables to be used in a batch or in the body of a stored procedure or function. To ensure that the data in the table type meets specific requirements, you can create unique constraints and primary keys on the table type. Actually, this is a neat addition. If I understand this correctly, we can pass an array (table) of parameter items into a stored procedure as a unit. So instead of passing in 50 individual parameters to a stored procedure, it's now possible to pass in a single table type that holds all 50 parameter bits that need to get to the query.
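A quick sketch of how that might look in T-SQL; the type, procedure, and column names here are invented for illustration, and the pattern assumes SQL Server 2008 syntax:

```sql
-- Define a reusable table type; the primary key enforces uniqueness per batch.
CREATE TYPE dbo.OrderItemList AS TABLE
(
    ProductId INT NOT NULL PRIMARY KEY,
    Quantity  INT NOT NULL
);
GO

-- Table-valued parameters must be declared READONLY.
CREATE PROCEDURE dbo.AddOrderItems
    @OrderId INT,
    @Items   dbo.OrderItemList READONLY
AS
BEGIN
    INSERT INTO dbo.OrderItems (OrderId, ProductId, Quantity)
    SELECT @OrderId, ProductId, Quantity
    FROM @Items;
END
GO

-- The caller fills one variable of the type and sends the whole batch at once.
DECLARE @batch dbo.OrderItemList;
INSERT INTO @batch VALUES (1, 2), (2, 5), (3, 1);
EXEC dbo.AddOrderItems @OrderId = 42, @Items = @batch;
```

One round trip carries the whole batch, which is the point of the feature: the alternative would be 50 scalar parameters or repeated procedure calls.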
FILESTREAM storage: Now here is where it gets interesting. It's now possible to store data directly to the file system on the server via the FILESTREAM data type, which allows you to store unstructured data directly in the file system. You can use the new storage type VARBINARY(MAX) FILESTREAM to define table columns and store large binary data as files in the file system instead of storing them as Binary Large Objects (BLOBs). In addition, you can use T-SQL statements (SELECT, INSERT, UPDATE, or DELETE) to query and modify FILESTREAM data. You can use the rich set of streaming APIs provided by Win32 for better streaming performance while maintaining transactional consistency, and you can apply SQL Server functionality to FILESTREAM data such as triggers, full-text search, backup and restore, SQL permissions, Database Console Command (DBCC) checks, and replication.

Sparse columns: Typically, if you have a column/field in your database table that is infrequently used and contains a null value in most cases, the new sparse columns feature provides a more efficient way to represent it.

In wrapping up, it's also worth mentioning that the UDT types introduced in SQL Server 2005 have undergone some changes in the 2008 release. Previously, UDT types were limited to a size of 8K...that limitation has been removed.
In the world of .NET programming, ADO.NET has become the de facto standard for accessing databases of all types (relational or otherwise). The purists among us will be quick to point out that ADO.NET should be the only option and that as developers we shouldn't even be thinking about bypassing ADO.NET to get to our database. But the not-so-pure will be just as quick to point out the alternative methods for pulling data from our databases without writing a line of database code. If you are still reading this article, it means you, like yours truly, have had occasions in the past where bypassing ADO.NET makes sense, whether you are creating a small website for your wedding guest list or just putting together a demo website of sorts.

The SqlDataSource control: The SqlDataSource control (in my mind, the grand-daddy of declarative data access) allows for the definition of queries in a declarative way. You can connect the SqlDataSource to controls such as the DataList and give your users the option to edit and update data without requiring any ADO.NET code. While the SqlDataSource control handles the heavy lifting required to facilitate the communication, it's worth mentioning that behind the scenes it uses ADO.NET to do this heavy lifting. The SqlDataSource supports any database that has a full ADO.NET provider. Apart from connecting you to your database with minimal code, it provides further benefits to developers looking to offer functionality such as paging and sorting of datagrids without writing lines of code to achieve it. If you bind your datagrid directly to a SqlDataSource control, you have paging and sorting functionality right at your fingertips; you can get to it by setting a few properties on your grid control. However, with all its "niceties" the SqlDataSource is somewhat controversial, because it encourages you to place database logic in the markup portion of your page.
Many will agree that there are times when the benefits of using this control far outweigh this particular drawback.

LINQ to SQL: LINQ provides us with a cool new way of accessing data. The source of this data can be anything from files in your computer's file system, to a collection of objects living in a generic list in server memory, to rows of data residing in a database table on the North Pole. LINQ comes in a variety of flavors (LINQ to Objects, LINQ to XML, LINQ to Entities, LINQ to SQL, LINQ to DataSet). With LINQ to SQL, you define a query using C# code (or the LinqDataSource control) and the appropriate database logic is generated automatically. LINQ to SQL supports updates, generates secure and well-written SQL statements, and provides some customizability. Like the SqlDataSource control, LINQ to SQL doesn't allow you to execute database commands that don't map to straightforward queries and updates (such as creating tables). Unlike the SqlDataSource control, LINQ to SQL only works with SQL Server and is completely independent of ADO.NET.

Profiles: The Profiles feature, introduced in .NET Framework 2.0, allows you to store user-specific blocks of data in a database without writing ADO.NET code. You specify what data you want to gather and store by configuring the appropriate elements in your application's configuration file.
In wrapping up, it's worth mentioning that none of the options presented in this article is a replacement for ADO.NET, because none of them offers the full flexibility, customizability, and performance that hand-written database code offers. However, depending on the specific needs of your application, it may be worth using one or more of these features to augment your ADO.NET-centric data access layer.
The January publication of MSDN Magazine carried an article titled "Enhance Your Apps with the Integrated ASP.NET Pipeline". While the subject of the article is not something I would normally be super excited about, I decided to spend at least 10 minutes perusing it while commuting to work this morning (I've been riding the train the past couple of weeks). I recently installed Vista on my notebook and was keen to start looking into the features of IIS 7.0, which ships with Microsoft's latest desktop operating system. As it turned out, the article touched on a subject that might be useful down the road as we take a leap into the world of PHP programming at work.

It's long been common knowledge that PHP could run under IIS, but not in a way that would make it feasible to run production apps...in other words, if you configure your PHP apps in IIS 6.0/5.0, the result would be a website that runs very slowly, due mainly to the lack of thread safety in PHP apps. Alternately, the app could be configured to run under IIS using CGI, but CGI is a resource hog (one process per request) and results in an app that scales poorly in IIS.
ASP.NET web.config files provide a configuration system we can use to keep our applications flexible at runtime. In this article we will examine a simple technique for changing configuration values as our applications are moved from a testing/staging server into a production environment.
Developers often find themselves having to take extra care not to overwrite the web.config file as they move code from one environment to the next, typically from development to staging/testing, and finally to production. Let's take a look at a little-known feature of the appSettings element that can give us even more flexibility (see the source code in the URL below).
The appSettings element may contain a file attribute that points to an external file. If the external file is present, ASP.NET will combine the appSettings values from web.config with those in the external file. If a key/value pair is present in both files, ASP.NET will use the value from the external file.
In the attached code snippet (see link below), lines 2 through 8 show a typical implementation of a web.config file that points to an external configuration file. Lines 13 through 15 show what the external file might look like.
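A minimal sketch of the pattern described above; the file name and keys here are invented for illustration:

```xml
<!-- web.config: the file attribute names an optional override file -->
<configuration>
  <appSettings file="production.config">
    <add key="SmtpServer" value="localhost" />
    <add key="SiteTitle" value="Staging Site" />
  </appSettings>
</configuration>
```

The external production.config, deployed only on the production box, then contains its own appSettings element (for example an entry setting SmtpServer to the real mail host); any key it repeats wins over the web.config value, and ASP.NET simply ignores the file attribute when the file is absent, so the same web.config can travel through every environment unchanged.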
Package Details: unarchiver 1:1.10.1-1
Dependencies (7)
- bzip2
- gcc-libs (gcc-libs-git, gcc-libs-multilib-git, gcc-libs-multilib-x32, gcc-libs-multilib)
- gnustep-base (gnustep-base-clang-svn)
- ic)
- gcc-objc (gcc-objc-git, gcc-objc-multilib-git, gcc-objc-multilib-x32, gcc-objc-multilib) (make)
Required by (5)
- kde-servicemenus-unarchiver
- rpg2003-rtp (make)
- vcmi-demo (make)
- yumenikki-jp (make)
- yumenikki-zh-cn (make)
Sources (2)
Latest Comments
cgirard commented on 2017-08-21 13:58
@eang Thanks. Updated the URL.
eang commented on 2017-07-31 14:18
Found it, the new repo lives here:
eang commented on 2017-07-31 14:06
Still haven't found a new link for the source code.
eang commented on 2016-03-10 09:35
You're right :(
The PKGBUILD I linked installs libgnustep-base.so.1.24 under /usr/lib/unarchiver, which is not a good thing to do... (and indeed, namcap complains about it).
cgirard commented on 2016-02-29 14:59
@elv: I confirm daimonion observation
ldd /usr/bin/unar => [...]/usr/lib/libgnustep-base.so.1.24[...]
daimonion commented on 2016-02-28 16:10
I uninstalled gnustep-base and this is a result:
unar: error while loading shared libraries: libgnustep-base.so.1.24: cannot open shared object file: No such file or directory
eang commented on 2016-02-28 15:38
@cgirard: could you please move gnustep-base to makedepends? It is actually a build dependency and not a runtime one. See also this pkgbuild [1] which builds and works just fine.
[1]:
GuestOne commented on 2013-11-07 15:04
Please update to 1.8.1
rtfreedman commented on 2013-09-11 16:14
I've tried right after updating gnustep libs yesterday and failed.
Now I've tried again and succeeded... strange!
Sorry for the noise.
cgirard commented on 2013-09-10 21:04
Strange. It works for me and functions.h is in The Unarchiver/XADMaster/libxad/include/ so it should be found.
rtfreedman commented on 2013-09-10 19:42
Doesn't build anymore:
libxad/debug.c:26:23: fatal error: functions.h: No such file or directory
#include "functions.h"
^
compilation terminated.
make: *** [Build/libxad/all.o] Error 1
==> ERROR: A failure occurred in build().
cgirard commented on 2013-05-14 11:40
@dx: I modified the description to include unar & lsar in it.
dequis commented on 2013-05-12 09:37
You should name this package "unar" then, or at least mention in the description that it's not the Mac OS X app called "unarchiver" and that it includes unar and lsar. I was looking for the command-line tool and was confused about this package; I had to check the PKGBUILD to confirm. But it was what I needed, so thanks for maintaining this.
cgirard commented on 2012-11-26 21:37
I have switched the package to "unar" source files as it reduces unnecessary downloads. The version number has been switched accordingly.
cgirard commented on 2012-10-23 18:15
The CLI is multi-OS:
GuestOne commented on 2012-10-23 17:57
Latest version is 3.4.
But my question is: this is not a Mac software only? is a command line tool or have a gui?
cgirard commented on 2012-07-15 18:43
Orphaning the pkgbuild temporarily. I will be back in 2 weeks and will adopt it back if it is still orphaned.
rtfreedman commented on 2012-05-13 05:25
current version 3.2
daimonion commented on 2012-04-14 21:59
3.1 was released 8 days ago.
cgirard commented on 2012-04-06 14:33
Yes, thank you fauno and Demon. I'm lacking a bit of time right now. I'll try to update this this week-end.
daimonion commented on 2012-04-06 10:22
cgirard, use the PKGBUILD fauno provided, it works. Also please update it with the following (it will install man files as well):
package() {
cd "$srcdir/XADMaster"
install -d "$pkgdir/usr/bin/"
install -d "$pkgdir/usr/share/man/man1"
install -m755 unar lsar "$pkgdir/usr/bin/"
cd "$srcdir/Extra"
gzip -c lsar.1 > lsar.1.gz
gzip -c unar.1 > unar.1.gz
install -m644 unar.1.gz lsar.1.gz "$pkgdir/usr/share/man/man1"
}
fauno commented on 2012-04-02 01:29
I'm building 3.0 right now. PKGBUILD will be at
Thanks for your pkgbuild!
cgirard commented on 2012-03-21 13:29
Thanks. The Linux makefile has disappeared and Linux support does not seem to appear on the website anymore. I'll have a deeper look to see if this can still work.
daimonion commented on 2012-03-21 13:19
3.0 is out.
cgirard commented on 2011-12-22 10:02
OK sorry about the long delay. Hope it works now.
I had to put back a patch that was no longer needed. Seems we are dependent on the compile flags used by gnustep-base. Had to add another one for a missing link directive for libz.
Anonymous comment on 2011-12-21 18:37
getting the same error :(
evanlec commented on 2011-12-11 21:23
@andy123 can you explain how you recompiled gnustep-base exactly? did you change config options when building or something?
having same problem as @adaptee. Thanks
ajs124 commented on 2011-10-10 20:30
@adaptee
i got the same error, but after recompiling gnustep-base, it worked.
adaptee commented on 2011-10-08 11:19
It fails to build. Any suggestion?
==> Starting build()... unar.m -o unar.o CSCommandLineParser.m -o CSCommandLineParser.o
In file included from /usr/include/Foundation/NSClassDescription.h:30:0,
from /usr/include/Foundation/Foundation.h:50,
from CSCommandLineParser.h:1,
from CSCommandLineParser.m:1:
/usr/include/Foundation/NSException.h:44:2: error: #error The current setting for native-objc-exceptions does not match that of gnustep-base ... please correct this.
In file included from /usr/include/Foundation/NSClassDescription.h:30:0,
from /usr/include/Foundation/Foundation.h:50,
from XADUnarchiver.h:1,
from unar.m:1:
/usr/include/Foundation/NSException.h:44:2: error: #error The current setting for native-objc-exceptions does not match that of gnustep-base ... please correct this.
make: *** [CSCommandLineParser.o] Error 1
make: *** Waiting for unfinished jobs....
make: *** [unar.o] Error 1
==> ERROR: A failure occurred in build().
cgirard commented on 2011-05-31 18:36
I perfectly understand your point but I won't add such a field. Please take a look at this thread on the ML:
psychedelicious commented on 2011-05-31 17:49
The reason I asked for the "provides" line is to satisfy dependencies on packages that rely on unrar. Unrar is not 100% FLOSS whereas unarchiver is. I hope this clears things up for people.
rtfreedman commented on 2011-05-23 00:36
As a side note, I was able to unpack ancient .arc archives from 1986 for the first time - thanks unarchiver :)
rtfreedman commented on 2011-05-23 00:00
I'm not yet familiar with PKGBUILD's - but it seems everything is as it should be.
Unless there is a generic "provides" it doesn't really help, and it clearly doesn't replace anything.
cgirard commented on 2011-05-22 12:43
However, "provides" is different from "replaces". The Unarchiver does provide the same functionality as unrar (but with a different binary and syntax).
I'm not really sure what is the correct approach there.
rtfreedman commented on 2011-05-22 12:11
Pardon, I've used Parabola's build and didn't bother to look at your PKGBUILD.
cgirard commented on 2011-05-22 09:11
There is no provide field in this pkgbuild.
rtfreedman commented on 2011-05-22 00:30
Please remove provides=('unrar')
The binaries are named lsar and unar - so there is no conflict with unrar and no need to uninstall it!
psychedelicious commented on 2011-05-11 21:27
can you please add the following to the PKGBUILD?
provides=('unrar')
cgirard commented on 2011-05-06 18:42
Great! If you notice any needed improvement please tell me. I'm not used to Objective-C/gnustep packaging.
fauno commented on 2011-05-06 16:27
I've included your package into Parabola's repos :)
I’m working on the final exercise, and have a strange issue. I have previously had difficulties with running nosetests. My file structure is the same as laid out in ex 52:
- /gothonweb
- __init__.py
- app.py
- markup.py
- planisphere.py
- /static
- /templates
- layout.html
- show_room.html
- you_died.html
- /tests
- app_tests.py
- planisphere_tests.py
app.py runs just fine on localhost in the browser, but when nosetests runs I get a failure. The import statements in planisphere_tests.py are taken directly from the exercise:
from nose.tools import *
from gothonweb.planisphere import *
Traceback is:
File "/Users/Zeesy/hard_way/projects/gothonweb/app.py", line 3, in <module>
    import planisphere
ModuleNotFoundError: No module named 'planisphere'
When run with:
from nose.tools import *
import planisphere
Traceback is:
File "/Users/Zeesy/hard_way/projects/gothonweb/tests/planisphere_tests.py", line 2, in <module>
    import planisphere
ModuleNotFoundError: No module named 'planisphere'
As mentioned, app.py runs the local server with no problems and the game works perfectly in the browser. What is going on here?
99 questions/Solutions/50
From HaskellWiki
Latest revision as of 19:53, 18 January
A relatively short solution:
import Data.List (sortBy, insertBy)
import Data.Ord (comparing)
import Control.Arrow (second)

huffman :: [(Char, Int)] -> [(Char, String)]
huffman =
  let shrink [(_, ys)]  = sortBy (comparing fst) ys
      shrink (x1:x2:xs) = shrink $ insertBy (comparing fst) (add x1 x2) xs
      add (p1, xs1) (p2, xs2) =
        (p1 + p2, map (second ('0':)) xs1 ++ map (second ('1':)) xs2)
  in shrink . map (\(c, p) -> (p, [(c, "")])) . sortBy (comparing snd)
Another short solution that's relatively easy to understand (I'll be back to comment later):
import qualified Data.List as L

huffman :: [(Char, Int)] -> [(Char, [Char])]
huffman x = reformat $ huffman_combine $ resort $ morph x
  where
    morph x  = [ ([[]], [c], n) | (c, n) <- x ]
    resort x = L.sortBy (\(_, _, a) (_, _, b) -> compare a b) x
    reformat (x, y, _) =
      L.sortBy (\(a, b) (x, y) -> compare (length b) (length y)) $ zip y x
    huffman_combine (x:[]) = x
    huffman_combine (x:xs) =
      huffman_combine $ resort ((combine_elements x (head xs)) : (tail xs))
      where
        combine_elements (a, b, c) (x, y, z) =
          ((map ('0':) a) ++ (map ('1':) x), b ++ y, c + z)
Hi,
Consider the following C++ code:
#include <malloc.h>
#include <cmath>
#include <complex>

int main(int argc, char **argv)
{
  int N = 4000000;
  double * _arr_4_0;
  _arr_4_0 = (double *) (malloc((sizeof(double) * (unsigned long) (5331.0))));
  for (int _i0 = 0; (_i0 <= 5330); _i0 = (_i0 + 1))
  {
    _arr_4_0[_i0] = std::sin(_i0);
  }
  double * _arr_7_7;
  _arr_7_7 = (double *) (malloc((sizeof(double) * (unsigned long) (((0.1 * (double) (N)) + -66.0)))));
  #pragma omp parallel for schedule(static)
  #pragma ivdep
  for (int _i0 = 0; (_i0 < ((N / 10) - 66)); _i0 = (_i0 + 1))
  {
    _arr_7_7[_i0] = std::sqrt(_i0);
  }
  std::complex<double> * _arr_6_8;
  _arr_6_8 = (std::complex<double> *) (malloc((sizeof(std::complex<double>) * (unsigned long) (((0.1 * (double) (N)) + -5396.0)))));
  for (int o1 = 0; (o1 < (((N + 110) / 320) - 168)); o1 = (o1 + 1))
  {
    int _ct167 = ((((32 * o1) + 31) < ((N / 10) - 5397)) ? ((32 * o1) + 31) : ((N / 10) - 5397));
    for (int o2 = (32 * o1); (o2 <= _ct167); o2 = (o2 + 1))
    {
      _arr_6_8[o2] = (0.0 + 0.0j);
    }
  }
  #pragma omp parallel for schedule(static)
  for (int o1 = 0; (o1 < (((N + 110) / 320) - 168)); o1 = (o1 + 1))
  {
    for (int o2 = 0; (o2 <= 166); o2 = (o2 + 1))
    {
      int _ct168 = ((((32 * o1) + 31) < ((N / 10) - 5397)) ? ((32 * o1) + 31) : ((N / 10) - 5397));
      #pragma unroll_and_jam (6)
      for (int o3 = (32 * o1); (o3 <= _ct168); o3 = (o3 + 1))
      {
        int _ct169 = ((5330 < ((32 * o2) + 31)) ? 5330 : ((32 * o2) + 31));
        #pragma ivdep
        for (int o4 = (32 * o2); (o4 <= _ct169); o4 = (o4 + 1))
        {
          _arr_6_8[o3] = (_arr_6_8[o3] + (_arr_7_7[((5330 - o4) + o3)] * _arr_4_0[o4]));
        }
      }
    }
  }
  return 0;
}
I compiled this using the following command (file saved as test.cpp):
icpc -O3 -qopenmp -qopt-report=5 -qopt-report-file=stdout test.cpp > optrpt
However, I get a warning on stderr which says:
test.cpp(38): (col. 7) remark: unroll_and_jam pragma will be ignored due to
There is no reason specified for why the pragma is being ignored. Could you please help me diagnose this?
icpc -V
gives
Intel(R) C++ Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 17.0.1.132 Build 20161005 Copyright (C) 1985-2016 Intel Corporation. All rights reserved.
This bug is also present on
Intel(R) C++ Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 17.0.2.174 Build 20170213 Copyright (C) 1985-2017 Intel Corporation. All rights reserved.
Any suggestions on how to debug this would be appreciated.
Thanks,
Abhinav
An update on the issue:
The warning is issued only when the unroll_and_jam pragma is used with tiled loops. If lines 31-48 are replaced with the code below, no warning is emitted and the outer loop is unroll-jammed.
#pragma omp parallel for schedule(static)
#pragma unroll_and_jam (16)
for (int _i0 = 0; (_i0 < ((N / 10) - 5396)); _i0 = (_i0 + 1))
{
  #pragma ivdep
  for (int _i1 = 0; (_i1 <= 5330); _i1 = (_i1 + 1))
  {
    _arr_6_8[_i0] = (_arr_6_8[_i0] + (_arr_7_7[((5330 + _i0) - _i1)] * _arr_4_0[_i1]));
  }
}
Hi Abhinav,
I will investigate it and will be back with an update shortly. Looks like a bug and I will check your test case with 18.0 Beta compiler version.
Regards,
Igor
The problem is still present in the 18.0 compiler version. We should correct the remark message for sure, to explain the reason. It looks like the loop was distributed into 2 chunks and the innermost loop of chunk 1 was vectorized; chunk 2 was not vectorized. I will escalate this to the developers.
Thank you for reporting this problem.
https://community.intel.com/t5/Intel-C-Compiler/unroll-and-jam-pragma-ignored-but-no-reason-specified/td-p/1088652
csLoadResult Struct Reference
Return structure for the iLoader::Load() routines. More...
#include <imap/loader.h>
Detailed Description
Return structure for the iLoader::Load() routines.
Definition at line 135 of file loader.h.
Member Data Documentation
The object that was loaded.
Depending on the file you load this can be anything like:
- 'world' file: in that case 'result' will be set to the engine.
- 'library' file: 'result' will be 0.
- 'meshfact' file: 'result' will be the mesh factory wrapper.
- 'meshobj' file: 'result' will be the mesh wrapper.
- 'meshref' file: 'result' will be the mesh wrapper.
- 'portals' file: 'result' will be the portal's mesh wrapper.
- 'light' file: 'result' will be the light. Note: in the case of a light, call DecRef() after you have added it to a sector.

Use scfQueryInterface on 'result' to detect what type was loaded.
Definition at line 153 of file loader.h.
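As a rough illustration of the scfQueryInterface note (a sketch only; the 'loader' variable and the exact call site are assumptions on my part, not taken from this page):

```cpp
// Hypothetical call site; 'loader' is an iLoader obtained elsewhere.
csLoadResult rc = loader->Load ("world");
csRef<iEngine> engine = scfQueryInterface<iEngine> (rc.result);
if (engine)
{
  // 'result' was the engine, so a 'world' file was loaded.
}
```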
The documentation for this struct was generated from the following file:
Generated for Crystal Space 2.0 by doxygen 1.6.1
http://www.crystalspace3d.org/docs/online/new0/structcsLoadResult.html
#include <wx/treectrl.h>
A tree event holds information about events associated with wxTreeCtrl objects.
To process input from a tree control, use these event handler macros to direct input to member functions that take a wxTreeEvent argument.
The following event handler macros redirect the events to member function handlers 'func' with prototypes like:
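The prototype itself was stripped from this page; per the wxWidgets reference it has the form:

```cpp
void handlerFuncName(wxTreeEvent& event);
```

and an event table entry then pairs a control ID with such a handler, e.g. EVT_TREE_SEL_CHANGED(id, func).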
Event macros:
Constructor, used by wxWidgets itself only.
Returns the item (valid for all events).
Returns the key code if the event is a key event.
Use GetKeyEvent() to get the values of the modifier keys for this event (i.e. Shift or Ctrl).
Returns the key event for EVT_TREE_KEY_DOWN events.
Returns the label if the event is a begin or end edit label event.
Returns the old item index (valid for EVT_TREE_SEL_CHANGING and EVT_TREE_SEL_CHANGED events).
Returns the position of the mouse pointer if the event is a drag or menu-context event.
In both cases the position is in client coordinates - i.e. relative to the wxTreeCtrl window (so that you can pass it directly to e.g. wxWindow::PopupMenu()).
Returns true if the label edit was cancelled.
This should be called from within an EVT_TREE_END_LABEL_EDIT handler.
Set the tooltip for the item (valid for EVT_TREE_ITEM_GETTOOLTIP events).
Windows only.
https://docs.wxwidgets.org/trunk/classwx_tree_event.html
Write a class called Rational which has instance variables num and den which hold the numerator and denominator values such that the number is always expressed in reduced form. Include the following methods in the class Rational:
(a) A constructor that accepts two integer parameters a and b and creates a Rational object representing the rational number a/b.
(b) A constructor that accepts one integer parameter a and creates a Rational object representing the number a.
(c) A method called setRational which accepts two integer parameters a and b and sets the number to a/b.
(d) A method called setRational which accepts one integer parameter a and sets the number to a.
(e) A method called getNum that returns the numerator in the reduced form expression of the rational number.
(f) A method called getDen that returns the denominator in the reduced form expression of the rational number.
(g) A method called add which accepts two integers as parameters (say, c and d) and updates num (say, holding value a) and den (say, holding value b) such that num/den= a/b + c/d.
final num and den should be in reduced form.
Here's what I have so far... not sure if I'm even close.
Code :
public class Rational {
    int num;
    int den;

    public Rational() {
        num = 0;
        den = 1;
    }

    public Rational(int a, int b) {
        if (b == 0) {
            b = 1;
        }
        num = a;
        den = b;
        reduce();
        Rational R = new Rational(num, den);
    }

    public void setRational(int a, int b) {
        if (b == 0) {
            b = 1;
        }
        num = a;
        den = b;
        reduce();
        Rational setR = new Rational(num, den);
    }

    public void setRational(int a) {
        a = a;
    }

    private void reduce() {
        if (num != 0) {
            int g = gcd(num, den);
            num = num / g;
            den = den / g;
        }
    }

    public Rational(int a) {
        num = a;
        den = 1;
    }

    private static int gcd(int m, int n) {
        int mx = Math.max(m, n);
        int mn = Math.min(m, n);
        int remainder = 1;
        while (remainder != 0) {
            remainder = mx % mn;
            mx = mn;
            mn = remainder;
        }
        return mx;
    }

    public static void main(String[] args) {
    }
}
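The heart of the assignment is keeping num/den in reduced form via the gcd. A standalone sketch of just that piece (class and method names here are mine, not part of the assignment):

```java
// Standalone sketch of the reduce step; names are illustrative only.
public class ReduceSketch {
    // Euclid's algorithm for the greatest common divisor (m, n >= 0, not both 0).
    static int gcd(int m, int n) {
        while (n != 0) {
            int r = m % n;
            m = n;
            n = r;
        }
        return m;
    }

    // Returns {num, den} of a/b in reduced form (b != 0 assumed).
    static int[] reduce(int a, int b) {
        int g = gcd(Math.abs(a), Math.abs(b));
        return new int[] { a / g, b / g };
    }

    public static void main(String[] args) {
        int[] r = reduce(12, 8);                 // 12/8 -> 3/2
        System.out.println(r[0] + "/" + r[1]);
        // add: a/b + c/d = (a*d + c*b) / (b*d), then reduce: 1/2 + 1/3 -> 5/6
        int[] s = reduce(1 * 3 + 1 * 2, 2 * 3);
        System.out.println(s[0] + "/" + s[1]);
    }
}
```

The add method the assignment asks for is then just the cross-multiplication shown in the comment, followed by the reduce step.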
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/30406-complex-numbers-need-help-finishing-printingthethread.html
Hi guys, unsure what is wrong. This is the start of my hangman code but I get an error...
Traceback (most recent call last):
File "python", line 14, in <module>
TypeError: 'str' object does not support item assignment
import random
listOfWords = ["example", "potato", "python", "rocks","test", "hangman"]
guessWord = random.choice(listOfWords)
print(guessWord)
splitlist = list(guessWord)
print(splitlist)
dash = "_ " * len(splitlist)
print(dash)
while 0 == 0:
    guess = input("Guess a letter: ")
    if guess in guessWord:
        for n,i in enumerate(splitlist):
            if i == guess:
                dash[n] = guess
The line:
dash = "_ " * len(splitlist)
will give you a string which will be immutable. Not sure if you need a list instead of string:
dash = ["_"] * len(splitlist)
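To see the difference directly: a str rejects item assignment, while a list accepts it and can be joined back into a display string (a standalone demonstration, separate from the code above):

```python
# Strings are immutable: item assignment raises TypeError.
dash_str = "_" * 7
try:
    dash_str[0] = "e"
except TypeError as exc:
    print(exc)  # 'str' object does not support item assignment

# A list of one-character strings is mutable, so the hangman-style
# update works; join it for display.
word = "example"
guess = "e"
dash = ["_"] * len(word)
for i, letter in enumerate(word):
    if letter == guess:
        dash[i] = guess
print(" ".join(dash))  # e _ _ _ _ _ e
```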
https://codedump.io/share/td2diCgNEtxV/1/a-quotquot39str39-object-does-not-support-item-assignmentquotquot-when-replacing-elements-in-a-list
Hi, quiver. I don't think we can easily go around this problem
if we have to capture iterators in generator expression.
If you run following, you'll know what I mean.
>>> a = iter("abcd")
>>> b = iter("abcd")
>>> [(x, y) for x in a for y in b]
[('a', 'a'), ('a', 'b'), ('a', 'c'), ('a', 'd')]
I think one possible solution could be, instead of passing
evaluations of iterators in generator expression, make a list
from iterator and, then pass it as argument. For instance,
g = (x for x in iter("abcd"))
will be equivalent to,
def __gen(_[1]):
    for x in _[1]:
        yield x
g = __gen(list(iter("abcd"))) # see 'list'
- instead of g = __gen(iter("abcd")) .
I'm not sure if I'm in a position to decide to do that way or
not. If the current reviewer (rhettinger) approves it, I'll do
that way. Or else, I think I will post it on the mailing list.
https://bugs.python.org/msg45182
Test under xvfb fail after upgrade to qt 5.15.0 for python
I have a Python application which uses Qt for its GUI. I use qtpy as a wrapper so my application can use PyQt5 or PySide2 as frontend. And for both backends, with version 5.15.0, tests executed on the CI server fail in the Qt widgets tests with SIGABRT (tests for Windows and macOS pass).
Tests pass on my local machine.
What could have changed in Qt to cause such an error? It happens on GitHub Actions and Azure Pipelines, on the Ubuntu image.
Link to project:
Sample fail:
Okay that is a lot of code to have to look at but what I am not understanding is your statement that you use PyQt5 or PySide2 as your backend as these are frontend pieces and the backend and middletier ought to be pretty much straight python?
So can you explain this a bit more clearly as to how PyQt5 and/or PySide2 are being used as the backend but not the frontend?
My mistake. I use it as frontend, not backend. This fails on tests of custom widgets.
I will try to create a smaller example.
Here is example
log here
Here is a MUC showing you how to implement the importing of a custom file that contains a custom object. Note: if you have sub-folders that contain files, each of these sub-folders should contain one of these __init__.py files as outlined below.
# Create Folder Called Test
# Create a text file in Test rename it: __init__.py
# Create a text file in Test rename it: MyPushButton.py
# Create a text file in Test rename it: Main.py
# Nothing goes in __init__.py this is simply used by python to facilitate importing

# Place this in your MyPushButton.py file
from PyQt5.QtWidgets import QPushButton

class BigButton(QPushButton):
    def __init__(self, Text):
        QPushButton.__init__(self)
        self.setFixedHeight(100)
        self.setFixedWidth(100)
        self.setText(Text)

# Place this in your Main.py file
from PyQt5.QtWidgets import QApplication, QWidget, QHBoxLayout
from MyPushButton import BigButton

class MainDisply(QWidget):
    def __init__(self):
        QWidget.__init__(self)
        Top = 300; Left = 700; Width = 300; Hight = 100
        self.setGeometry(Left, Top, Width, Hight)
        self.setWindowTitle('Custom Tester')

        self.btnPush = BigButton('Push')
        self.btnPush.clicked.connect(self.Pushed)

        HBox = QHBoxLayout()
        HBox.addWidget(self.btnPush)
        HBox.addStretch(1)
        self.setLayout(HBox)

    def Pushed(self):
        print('Hey You Pushed Me!')

if __name__ == "__main__":
    MainEventHandler = QApplication([])
    MainApplication = MainDisply()
    MainApplication.show()
    MainEventHandler.exec()
How does this respond to my question about changes in Qt 5.15? My code works with Qt 5.14. It fails on the QApplication constructor call.
Here is the log for both bindings (PyQt5 and PySide2), for Qt versions 5.14 and 5.15.
For Qt 5.14 all tests pass. For Qt 5.15 it fails on QApplication creation.
On my PC it passes for both versions. On a server without X (but with Xvfb) it fails.
Okay @Czaki it was not clear in your first post that you were having a versioning issue but I would guess if you implemented things as I have shown the issue would go away that it fails in a later version versus an earlier version means perhaps they fixed the bug that allowed that to work that way
So did you try the example I supplied to see whether it works for you or not in whatever version of Qt you are using?
Also if you can denote exactly where within your source code you are making the Import and there the file you are Importing resides I can look at that too and let you know if there are any basic structural issues within your code -- I import numerous custom objects in my project without any issues at all
@Czaki I am only just now arriving on this thread, and when I look at the GitHub actions now (for PartSeg) I see a lot of "green" (successful) jobs. Did you fix the problem?
Was this your fix?
(I don't use python, but I, too, have run into weird issues with the Azure/Microsoft configuration of the GitHub Ubuntu runner environment.)
@KH-219Design It looks like it points to the wrong build, but I'm not sure how that could change. Current builds are green because I blocked the 5.15.0 release.
Maybe this link will work:
In this repository I reproduce this error
It shows that this is problem with QApplication in this line
_qapp_instance = qt_api.QApplication(qapp_args)
where qapp_args is empty list
@Denni-0 This code is the test runner (pytest). The main application is run in the proper way.
And I can run the code on Linux with a running X11 server. The problem happens on a server with X emulation via Xvfb. But the CI provider does not provide a server with a running X11, so Xvfb and Xdummy are the only options for testing GUI components. And I do not know how to set up Xdummy.
Note also that the test runner creates the QApplication object before the tests start.
Wow I looked at that custom Main program you have and that is rather ugly (note I do not say that without also extending an offer to help you with that clean up if you want). It would take me quite a while to dissect it and clean it up in order to sort out what you are doing and where any potential issues might reside. So first I would suggest that you consider cleaning it up and making it a bit more clear and concise but that aside let me supply you with this bit of code to maybe help you with your issue...
Note being dependent on another tool means you have just one more layer that can cause issues so maybe instead of using
qtpy you might want to just use the following more explicit means to handle your "dual" boot:
try:
    from PyQt5.QtCore import pyqtSignal, pyqtSlot
    from PyQt5.QtGui import QFontDatabase
    from PyQt5.QtWidgets import QApplication
except ImportError:
    from PySide2.QtCore import Signal as pyqtSignal, Slot as pyqtSlot
    from PySide2.QtGui import QFontDatabase
    from PySide2.QtWidgets import QApplication

# -------------------------
import argparse
# .... everything else
The above simply makes sure you are using the same modules that are currently being used in Qt5, as the only current differences between PyQt5 and PySide2 are the Signal and Slot references, and that PySide2 still uses the Qt4 version to launch your MainEventThread = QApplication([]), as follows:
sys.exit(MainEventHandler.exec_())
whereas PyQt5, while it currently still supports the Qt4 call (for now), has upgraded to the Qt5 version to launch your MainEventThread as follows:
MainEventHandler.exec()
Oh, and PySide2 does not currently support all of Qt5, but this will help you know that right up front when you try to run the program, as it will fail on import for PySide2.
Further by simply adding something in the PyQt import that automatically fails you can force it to do a PySide2 implementation
Also when importing other things in conjunction with Qt you should always declare the Qt stuff first as there is a known issue where if you do not then you sometimes get incorrect associations and this can cause all kinds of strange runtime errors and simply by declaring the Qt stuff first forces all subsequent imports to be associate with that specific Qt version
Elsewhere I found that my problem comes from this change:
so I need to install the xcb libraries with:
sudo apt-get install -y libdbus-1-3
(not sure if all of them are needed)
I use simple file:
from qtpy.QtWidgets import QApplication, QMainWindow

app = QApplication([])
window = QMainWindow()
window.show()
app.exec_()
with QT_DEBUG_PLUGINS=1 defined in the environment to get information about which libraries are missing.
@Denni-0 This code starts one of 3 applications based on command line arguments, and contains a function _test_imports which tests whether all libraries are properly bundled when freezing with pyinstaller.
I'm not sure if you can simplify it.
qtpy is a simple wrapper which detects the installed Qt libraries and allows controlling which one is used if multiple are installed.
Based on PEP-8, all imports of the Python standard library should come before imports of third-party libraries.
Well keep in mind two things first PEP-8 is not an actually good set of standards as they contain of lot of nonsensical items that have nothing to do with creating quality code and have more to do with restricting someones style to fit someone else's view of what should and should not be -- these kind of standards are rarely ever good standards as basic standards should leave room for style flexibility while covering all those things that are actually good coding practices which those standards also do not fully cover
Next Python-Qt is not pure Python code which is what PEP-8 only applies to and this is just an example of why they are not good standards to follow and/or should be taken with a major grain of salt and only use those things that are actually good generic programming practices as opposed to some self-opinionated preferences. Now if you feel a need to follow PEP-8 as if it were created by some gods then go ahead just know that within Python-Qt there is a statement that if you do not import your Python-Qt files first that (especially in multi-version settings) you may end up with incorrect file associations and all the issues that go right along with that.
Oh as a secondary note I had someone once tell me that those standards are a must to use by anyone and everyone using python and I simply said not if they are not required to do so by whomever is signing the paycheck and again frankly if some company asked me about those standards I would give them my honest opinion which is not to use them as is -- not saying all of it is bad -- but it truly is not something I would adhere to by any stretch of the imagination unless the guy signing my paycheck said I had to which they often do not.
Further if you step into a house that did not use PEP-8 for there coding standards (as they did not consult the gods that forbade this) then you would do a much better job if you figure out their standards (if any) and adhere to these so all the code remains having the same look and feel. BTW you might be surprised to find that this happens (not using PEP-8) more often than not because other experienced software engineers feel the same way I do in that those are not good standards due to their heavy incomplete bias
On a slightly different note -- I think there actually would be a much cleaner and more concise manner to setup a primary file whose job is to determine what set of code to call based on various criterion and then encapsulate the various functionalities as completely autonomous units. Also if the purpose of all the code is to determine if you have missing code libraries and to determine which program to actually launch then -- I am definitely sure I can help you design a much cleaner process than you currently have and do not think it would require a lot of re-coding just some adjustments and clean ups an encapsulating
@Denni-0 Could you point to any part of the standard Python library which may break Qt?
I know that there are some third-party libraries (like matplotlib, vispy) which may break when using Qt with the wrong import order. But I do not see how any part of the Python standard library could break Qt.
@Czaki said in Test under xvfb fail after upgrade to qt 5.15.0 for python:
@Denni-0 Could you point to any part of the standard Python library which may break Qt?
Excellent question.
So what you are actually asking is have I dug so deeply into Python-Qt that I have discovered the actual bug that creates the phenomena that Qt denotes is a known issue within their documentation. The answer to that not-so-excellent question is no -- nor do I plan to at this time and perhaps never. Basically dealing with that low-level of coding is not my favorite part of programming I much prefer to solve business related puzzles..
However let me juxtapose a question -- can you point out why PEP-8 denotes that ALL non-direct-python libraries MUST come after the python libraries. Is this sound advice or simply a white-wash with no actual solid basis. I am not saying that it applies to just some libraries because that is not their claim (or at least as you have stated it). There claim is ALL other libraries so the reason for it MUST apply to ALL such libraries and not to just some. Or are you like me, simply taking their advice on the surface and running with it. Personally considering all the other junk in PEP-8 I will always take whatever they say with major grain of salt as to me their so-called standards speak of incompetency and if you have that on some level within something that important you generally get it throughout. Again I am not saying all of what they outline is bad, as they may have pulled some of their guidelines from actual quality standards but I am saying there is enough garbage in their to make them only a mild suggestion and that everything and I mean EVERYTHING they claim ought to be investigated with a fine tooth comb prior to adopting it for ones own standards for coding. Is it truly the issue they claim it to be or is it just some of their incompetent (note denoting something as standard guidelines that is simply a style preference and has no basis for quality coding is what I refer to as part of their incompetency) garbage that they chose to toss on the heap.
As for the Qt quandary investigate it yourself as I always tell my students, do not take what anyone else says as gospel, learn about it yourself and that goes double for anything I teach. I am not omnipotent and I can make mistakes but unless someone proves me wrong I stand on what I have learned to be true. So far no one has stepped forward to actually prove me wrong with actual facts, they would rather just say hey I do not agree with you so you must be wrong - so prove to me you are right. To which I reply prove that I am wrong or simply continue to choose to do it the wrong way that is YOUR choice. As for me, my stance was based on actual personal extensive research into the why's and how's and I am now just sharing the results. The Truth is Out There -- YOU just sometimes have to wade through a lot of garbage to find it.
So @Czaki and @JonB you make your own decision based on what you have found to be True assuming you bother to do so. I would challenge you both not to be mere lemmings and blind followers of PEP-8 become instead informed and educated programmers in the know. Find the arguments for both sides, dig deep into the why's and how's before you choose to adopt something as your own standard for doing something correctly.
Oh and these kind of issues tend to be compiler issues rather than actually bugs in language implementation so perhaps if you want to better understand this you need to better understand the compilers that are being used.
As I final note I will always speak out harshly against anything that is being toted as the Truth that is choke full of Lies and that is how I see PEP-8. It might contain truths but it also contains lies as such it is not to be trusted by any stretch of the imagination and definitely ought not to be followed so blindly as seems to be the case by some
@Czaki I'm laughing (at myself!) as I write this, because I'm about to say almost the same thing I said yesterday.
I'm not crazy! And I have read the latest messages.
Seeing that this was this still unsolved this morning, and given that I still have a lot of personal curiosity about things that go wrong in GitHub CI specifically, I came here thinking "well the post did link to a minimum repro of the issue."
So I decided I would dig in and fork the "Czaki/sample_qt_error" repository and see if I could diagnose it.
But when I go to it looks like there is a "green" (successful) job. So was this the fix?
If so, congrats on narrowing it down, and thank you for sharing this with the forum. (And sorry we did not provide the answer!)
If I am once again misinterpreting a "green" job, then today I will actually collaborate if this is still an ongoing problem.
In this case, there seems to be ample evidence that simply upgrading to Qt 5.15.0 for Python is the change that broke the CI jobs.
However, I thought I should share one "trick" I have been using on GitHub CI, which has helped me identify cases where changes made by GitHub have been the sudden cause of broken CI jobs.
In my CI job script:
I print into the CI log the following information before I start my compilation and test commands:
# Try various ways to print OS version info.
# This lets us keep a record of this in our CI logs,
# in case the CI docker images change.
uname -a || true
lsb_release -a || true
gcc --version || true  # oddly, gcc often prints great OS information
cat /etc/issue || true
# What environment variables did the C.I. system set? Print them:
env
If a job suddenly fails one day, then I download the GitHub logs from the most recent SUCCESS and from the failure jobs, and I do a diff of the log. This can reveal changes in the GitHub runner environment.
GitHub also prints similar info at the top of each job's log. Their built-in descriptive logging looks like:
2020-06-02T01:29:00.2554826Z ##[section]Finishing: Request a runner to run this job
2020-06-02T01:29:04.9619878Z Current runner version: '2.263.0'
2020-06-02T01:29:04.9642595Z ##[group]Operating System
2020-06-02T01:29:04.9643252Z Ubuntu
2020-06-02T01:29:04.9643359Z 18.04.4
2020-06-02T01:29:04.9643510Z LTS
2020-06-02T01:29:04.9643685Z ##[endgroup]
2020-06-02T01:29:04.9643850Z ##[group]Virtual Environment
2020-06-02T01:29:04.9643978Z Environment: ubuntu-18.04
2020-06-02T01:29:04.9644127Z Version: 20200525.2
2020-06-02T01:29:04.9644336Z Included Software:
So when jobs fail mysteriously, I have tended to "diff the logs" specifically to check that no GitHub changes happened.
Normally I adhere to "the first rule of bug hunting is assume the bug is your own." But GitHub Actions as a feature is in its infancy still, and they have made many rapid changes and I have seen it break several times. (see: and)
I offer this all as "future advice". Again, I see that the change to Qt 5.15.0 is sufficient to explain the broken jobs for @Czaki .
Well okay, so the issue has to do with the change from Qt 5.14.0 to Qt 5.15.0, but this is only a minor update which typically does not break major things, and as such my suspicions lie with the qtpy layer as being where the issue actually resides -- aka the change from 5.14 to 5.15 has not been handled within qtpy, and it is qtpy that is breaking due to this change, which does not actually break Qt in any way.
Again as always adding extra layers that one has no control over that can effect your code so dramatically is rather dangerous which is why I would stray away from it and perhaps see if there were a better methodology that would do the same thing and reduce the complexity level K.I.S.S. (Keep It Simple and Smart) is always the best rule of thumb in programming
Nice insights btw @KH-219Design those I am sure can prove helpful to anyone using Github
@KH-219Design I fixed it and described it here: (a few posts above).
And I was sure that the problem was in changes in Qt, because exactly the same code works for Qt 5.14.2 and fails for Qt 5.15.0 at the same time, on the same machine (one GitHub Actions job).
@Denni-0 said in Test under xvfb fail after upgrade to qt 5.15.0 for python:.
Could you provide a link to an article about this? Because I opened PySide2/__init__.py and it starts with
from __future__ import print_function
import os
import sys
which is an import of the Python standard library.
If you import sys and check len(sys.modules.keys()) you will see that more than 60 modules are already loaded (Python 3.6.8 on Linux gives 63 modules loaded). So importing any of these modules cannot break Qt.
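The claim is easy to check in a fresh interpreter (the exact count varies by Python version):

```python
import sys

# CPython preloads dozens of standard-library modules before any
# user code runs, so importing os/sys cannot be what breaks Qt.
preloaded = len(sys.modules)
print(preloaded)
assert preloaded > 30
```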
Next Python-Qt is not pure Python code
I see no difference between Qt and other libraries. Many commonly used libraries have extensions in C or are simply wrappers around a C/C++ library.
Putting all Python standard library imports before third-party libraries is a good idea because it shows the dependencies of the code: all packages from the second section need to be installed using a package manager.
I understand how compilers work; I have even written a simple one. You produce a wall of text which is not connected with my issue, without any arguments. Breaking PEP-8 blocks the use of many code-analysis tools, which speed up development and increase code quality.
Finally, it was shown that the whole problem is connected with dropping a single library from the Qt build.
I did a qtpy code review and it is enough.
And I tested my code against two versions of Qt at the same time; one passes and the other fails, with exactly the same versions of the other packages (qtpy included), so this has to be connected with the Qt change.
I also created a short code example which imports from PySide2, not qtpy, and it also fails.
@Czaki Ah. I actually clicked through to this time. Fascinating.
Excellent detective work.
@Czaki said in Test under xvfb fail after upgrade to qt 5.15.0 for python:
Could you provide a link to an article about this? Because I opened PySide2/__init__.py and it starts with
Okay the next time I stumble across it I will remember to record it this time. Most times I am in an intensive research mode and I never consider recording where all the bits and pieces are since the purpose is not to educate everyone else but just to educate myself.
Okay it seems you are satisfied with your version of what you have found great I hope it helps you moving forward. I was only throwing out information I have encountered to give you more food for thought as well as things you ought to caution against doing based on my past experiences.
BTW you do know that of the two PySide2 is not the one I would choose to test solidity against because it still uses code elements from Qt4 and of the two PyQt5 is far more up to date on having all the bits and pieces that correspond to the latest versions of Qt5 and as such is less likely to break because something internally is out of sync
My main backend is PyQt5, but it is from another company (Riverbank Computing). Because of licensing, some people prefer PySide2 over PyQt5, so I try to write universal code. It happens that some release of PySide2 breaks when PyQt5 works, but fortunately the next releases of PySide2 fix it.
I started this thread because it fails with both Qt packages, which means that it is connected with the Qt code, not the Python part.
And I show examples based on PySide2 because this package is produced by the Qt Company.
https://forum.qt.io/topic/115418/test-under-xvfb-fail-after-upgrade-to-qt-5-15-0-for-python
JavaRanch » Java Forums » Java » Beginning Java
u_int8_t in java
Daniel Botelho
Greenhorn
Joined: Nov 13, 2004
Posts: 6
posted
Jan 10, 2005 08:41:00
Hi,
I'm trying to port this C++ code to Java:
string CryptoManager::keySubst(const u_int8_t* aKey, int len, int n) {
    u_int8_t* temp = new u_int8_t[len + n * 10];
    int j = 0;
    for (int i = 0; i < len; i++) {
        if (isExtra(aKey[i])) {
            temp[j++] = '/'; temp[j++] = '%'; temp[j++] = 'D';
            temp[j++] = 'C'; temp[j++] = 'N';
            switch (aKey[i]) {
                case 0:   temp[j++] = '0'; temp[j++] = '0'; temp[j++] = '0'; break;
                case 5:   temp[j++] = '0'; temp[j++] = '0'; temp[j++] = '5'; break;
                case 36:  temp[j++] = '0'; temp[j++] = '3'; temp[j++] = '6'; break;
                case 96:  temp[j++] = '0'; temp[j++] = '9'; temp[j++] = '6'; break;
                case 124: temp[j++] = '1'; temp[j++] = '2'; temp[j++] = '4'; break;
                case 126: temp[j++] = '1'; temp[j++] = '2'; temp[j++] = '6'; break;
            }
            temp[j++] = '%'; temp[j++] = '/';
        } else {
            temp[j++] = aKey[i];
        }
    }
    string tmp((char*)temp, j);
    delete[] temp;
    return tmp;
}

string CryptoManager::makeKey(const string& aLock) {
    if (aLock.size() < 3)
        return Util::emptyString;
    u_int8_t* temp = new u_int8_t[aLock.length()];
    u_int8_t v1;
    int extra = 0;
    v1 = (u_int8_t)(aLock[0] ^ 5);
    v1 = (u_int8_t)(((v1 >> 4) | (v1 << 4)) & 0xff);
    temp[0] = v1;
    string::size_type i;
    for (i = 1; i < aLock.length(); i++) {
        v1 = (u_int8_t)(aLock[i] ^ aLock[i-1]);
        v1 = (u_int8_t)(((v1 >> 4) | (v1 << 4)) & 0xff);
        temp[i] = v1;
        if (isExtra(temp[i]))
            extra++;
    }
    temp[0] = (u_int8_t)(temp[0] ^ temp[aLock.length()-1]);
    if (isExtra(temp[0])) {
        extra++;
    }
    string tmp = keySubst(temp, aLock.length(), extra);
    delete[] temp;
    return tmp;
}
But here they are using the "u_int8_t" type...
How can I do this in Java?
Best regards,
Daniel Botelho
Peter Chase
Ranch Hand
Joined: Oct 30, 2001
Posts: 1970
posted
Jan 10, 2005 09:07:00
The type to which you refer is not a standard "C" type. It must be a typedef. It looks as if it is probably actually an unsigned 8-bit integer, or byte.
How to represent this in Java depends on what you need to do with the data. I didn't look in detail at your posted code, but perhaps what you really have is text (a zero-terminated array of unsigned 8-bit integers could represent a text string). In that case, java.lang.String might be best. Alternatively, to treat each integer as a separate piece of data, you could use the Java "byte" type (but remember it's signed - that doesn't matter in many operations, but could be crucial in some) or perhaps Java "char" (but remember Java characters are Unicode).
Probably, rather than try to do a direct line-by-line port of the "C" code, you should step back, work out what it does, and re-code in Java.
Betty Rubble? Well, I would go with Betty... but I'd be thinking of Wilma.
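Following Peter's point about signedness: the practical idiom for reading a Java byte as the C u_int8_t value is masking with 0xFF, and the nibble swap from the C++ code then carries over directly. A standalone sketch (class and method names are mine, not from the thread):

```java
// Treating Java's signed byte as an unsigned 8-bit value, plus the
// nibble swap used by makeKey(). Names are illustrative only.
public class UnsignedByteDemo {
    // Masking with 0xFF recovers the unsigned value 0..255.
    static int toUnsigned(byte b) {
        return b & 0xFF;
    }

    // Swap the high and low nibbles of an 8-bit value (v in 0..255).
    static int nibbleSwap(int v) {
        return ((v << 4) | (v >> 4)) & 0xFF;
    }

    public static void main(String[] args) {
        byte b = (byte) 0xAB;               // Java stores this as -85
        System.out.println(b);              // -85
        System.out.println(toUnsigned(b));  // 171
        System.out.println(Integer.toHexString(nibbleSwap(0xAB))); // ba
    }
}
```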
Daniel Botelho
Greenhorn
Joined: Nov 13, 2004
Posts: 6
posted
Jan 10, 2005 09:54:00
Hi,
Thanks for your quick answering!
Yes, I had already coded it in Java, but some characters are not being calculated correctly...
The code above should be used to generate a key string from a lock string.
For example, I've written the following code in Java:
public class NewEncyptionHandler {

	public static String calculateValidationKey(String lock) {
		int len = lock.length();
		int[] key = new int[len];
		key = computeKey(key, lock.toCharArray(), len);
		key = nibbleSwap(key, len);
		return dcnEncoding(key, len);
	}

	/**
	 * Except for the first, each key character is computed from the corresponding
	 * lock character and the one before it.
	 * If the first character has index 0 and the lock has a length of len, then
	 * the first key character is calculated from the first lock character and the
	 * last two lock characters.
	 */
	private static int[] computeKey(int[] key, char[] lock, int len) {
		key[0] = lock[0] ^ lock[len-1] ^ lock[len-2] ^ 0x05;
		for(int i = 1; i < len; i++)
			key[i] = lock[i] ^ lock[i-1];
		return key;
	}

	/**
	 * Next, every character in the key must be nibble-swapped.
	 */
	private static int[] nibbleSwap(int[] key, int len) {
		for(int i = 0; i < len; i++)
			key[i] = ((((int) key[i] << 0x4) & 0xf0) | (((int) key[i] >> 0x4) & 0xf));
		return key;
	}

	/**
	 * Finally, the characters with the decimal ASCII values of 0, 5, 36, 96, 124,
	 * and 126 cannot be sent to the server. Each character with this value must be
	 * substituted with the string /%DCN000%/, /%DCN005%/, /%DCN036%/, /%DCN096%/,
	 * /%DCN124%/, or /%DCN126%/, respectively. The resulting string is the key to
	 * be sent to the server.
	 */
	private static String dcnEncoding(int[] key, int len) {
		String pattern = "/%DCN{0}%/";
		String retKey = "";
		for(int i = 0; i < len; i++) {
			switch(key[i]) {
			case 0:
			case 5:
			case 36:
			case 96:
			case 124:
			case 126:
				retKey += java.text.MessageFormat.format(pattern, new Object[]{expandZeros(key[i])});
				break;
			default:
				retKey += (char) key[i];
				break;
			}
		}
		return retKey;
	}

	private static String expandZeros(int val) {
		String s = "" + val;
		while(s.length() < "000".length())
			s = "0" + s;
		return s;
	}

	public static void main(String[] args) {
		String lock = "tHn[0U1d1lewXCi:b=D^DlKR_L3jM6d+bSf;^BX7\\9cZ?dt";
		System.out.println(calculateValidationKey(lock));
	}
}
For this lock:
tHn[0U1d1lewXCi:b=D^DlKR_L3jM6d+bSf;^BX7\\9cZ?dt
passed to the function I receive this key:
�bS�VFUU�?!���5?�?��?r?�1�?r�%�? S�V����V�?V�
, but I should receive this one:
�bS�VFUU�?!���5������r��1��r�%�� S�V����V��V�
These are the invalid characters:
Position i:19 -> correct='�' (8216) java_code='?' (145)
Position i:24 -> correct='�' (8218) java_code='?' (130)
Position i:41 -> correct='�' (8211) java_code='?' (150)
Position i:45 -> correct='�' (8217) java_code='?' (146)
Position i:48 -> correct='�' (8226) java_code='?' (149)
Position i:58 -> correct='?' (65533) java_code='?' (129)
Position i:64 -> correct='?' (65533) java_code='?' (129)
So I think that this must have something to do with the "char" type, so I'm trying to use that "u_int8_t" from the C++ code...
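For reference, the usual Java idiom for C's u_int8_t is to keep the value in an int and mask with & 0xFF, and to decode raw byte values to text with an explicit charset rather than the platform default. The sketch below is illustrative, not the poster's code; the windows-1252 charset is an assumption, chosen because the "correct" values in the table above (e.g. 145 -> 8216, 150 -> 8211) match that encoding's mapping:

```java
// Sketch: emulating C's u_int8_t in Java. The windows-1252 charset here is
// an assumption based on the mismatch table in the post above.
import java.nio.charset.Charset;

public class UnsignedByteDemo {

    // Java's byte is signed; mask with 0xFF to recover the 0..255 value.
    static int toUnsigned(byte b) {
        return b & 0xFF;
    }

    // Nibble-swap on an unsigned 8-bit value, as in the C++ code.
    static int nibbleSwap(int v) {
        v &= 0xFF;
        return ((v << 4) | (v >> 4)) & 0xFF;
    }

    // Build a String from raw 0..255 values using an explicit charset,
    // instead of relying on the platform default.
    static String decode(int[] values, String charsetName) {
        byte[] raw = new byte[values.length];
        for (int i = 0; i < values.length; i++) {
            raw[i] = (byte) values[i];
        }
        return new String(raw, Charset.forName(charsetName));
    }

    public static void main(String[] args) {
        System.out.println(toUnsigned((byte) 0x91));   // 145
        System.out.println(nibbleSwap(0x19));          // 145 (0x91)
        // Byte 0x91 decodes to U+2018 (decimal 8216) under windows-1252.
        System.out.println((int) decode(new int[]{0x91}, "windows-1252").charAt(0));
    }
}
```

Decoding the same byte values with ISO-8859-1 or the platform default charset yields different characters, which would explain the kind of mismatches reported above.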
Best regards,
Daniel Botelho
Daniel Botelho
Greenhorn
Joined: Nov 13, 2004
Posts: 6
posted
Jan 11, 2005 01:22:00
Hi,
Is there any class in Java that does the same thing as u_int8_t in C?
Best regards,
Daniel Botelho
[ January 11, 2005: Message edited by: Daniel Botelho ]
Visual Studio makes programming Web Services a breeze. However, if you, like me, offer programs for download that use Web Services, you will notice that various content filtering devices like firewalls and proxy servers sometimes block the Web Service calls. Ordinary HTTP traffic is almost always allowed, but some firewalls and proxy servers are (mis)configured to block various content-types - in regards to Web Services, the content-type "text/xml".
This article describes an easy, one-line solution for bypassing content-filtering blockages of SOAP messages.
When you develop systems for the business and/or consumer market, it is imperative that the system works without the need for the user to take special actions to get it to work. As a developer of server-based shareware that communicates through Web Service calls to a centralized database, I sometimes get comments from users who couldn't get the system to work. During installation, I already test whether or not HTTP communication is allowed - but this was shown to be inadequate. One could have switched to a communication protocol other than SOAP - but then the strongly typed interfaces would disappear. As ordinary HTTP communication is almost always allowed, I needed to find a way to make SOAP messages appear as ordinary HTTP traffic to bypass the content filtering mechanisms.
The solution I chose was to use the Pluggable Protocol interface found in the System.Net namespace.
Through the use of Reflector and Ethereal, I got a pretty clear picture of how Web Service calls are done in the .NET framework - however, it was this Microsoft article that gave me the idea for the solution.
The article How to Write a Pluggable Protocol to Support FTP in Managed Classes by Using Visual C# .NET describes how you can create your own web protocol. The idea behind this is to wrap the logic needed to support a given protocol in a black box - and let the higher levels of the system communicate independently of the actual protocol used.
The protocol designed takes a SOAP request, and encodes it as an ordinary HTTP POST as if it was a standard HTML form that was posted. The content is sent to a proxy page located in the DMZ - which then unwraps the message and relays the message to the desired end-point. Empirical tests have shown that encoding isn't necessary for the SOAP response when it is returned in raw form.
The following three properties have been identified as necessary for the protocol to work:
To use the code, include it in your project, then register the protocol prefix with the framework along with the location of the proxy page which you have put on a server in the DMZ.
... WebRequest.RegisterPrefix("SOAPPROXY:", new ZinoLib.Net.SOAP.SOAPRelayProxyCreator("")); ...
Wherever you want to relay SOAP calls through the proxy, just replace "HTTP:" with the protocol prefix "SOAPPROXY:".
using( MyWebservice.TestWebService websvc = new MyWebservice.TestWebService() )
{
	bool cont = true;
	bool usesSoapProxy = false;
	websvc.Url = "";

	while( cont )
	{
		try
		{
			return websvc.TestMethod("string1", "string2");
		}
		catch(Exception e)
		{
			// Our webservice request failed -
			// let's try the SOAP proxy instead and resubmit the request
			if( !usesSoapProxy )
			{
				websvc.Url = websvc.Url.Replace("HTTP:", "SOAPPROXY:");
				usesSoapProxy = true;
			}
			else
				cont = false;
		}
	}
}
The code presented in this article has support for both .NET Framework 1.1 and .NET Framework 2.0 - but because of changes to the HttpWebRequest interface, remember to define the "Conditional Compilation Constant" NET20 if you are going to use it with .NET Framework 2.0.
The SOAP Relay Protocol encodes the destination URL, the SOAPACTION header, and the actual SOAP body as HEX in order to send it as a payload in a standard HTTP POST. Because of the use of HEX, the request will double in size.
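To make the size cost concrete, here is a minimal sketch of hex-encoding a payload - written in plain Java rather than the article's C#, with an illustrative toHex helper that is not part of the article's code:

```java
// Sketch: HEX-encoding a payload doubles its size, since each input byte
// becomes two ASCII hex digits. Illustrative only, not the article's encoder.
import java.nio.charset.StandardCharsets;

public class HexSizeDemo {

    static String toHex(byte[] data) {
        StringBuilder sb = new StringBuilder(data.length * 2);
        for (byte b : data) {
            // Mask to 0..255 so negative bytes format correctly.
            sb.append(String.format("%02x", b & 0xFF));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        byte[] soapBody = "<soap:Envelope/>".getBytes(StandardCharsets.UTF_8);
        String encoded = toHex(soapBody);
        System.out.println(soapBody.length + " bytes -> " + encoded.length() + " chars");
    }
}
```

Each input byte becomes exactly two hex digits, so the POST body sent to the proxy page is twice the size of the raw SOAP request.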
If you log the IP address at the destination URL, be aware that it will be that of the proxy page and not the source client.
The framework for the SOAP Relay Protocol is derived from the source code presented in the Microsoft article How to Write a Pluggable Protocol to Support FTP in Managed Classes by Using Visual C# .NET, but has been modified quite a bit.
One hurdle was how to create an HttpWebRequest from scratch. Paulo Morgado shows how to do this in his article HTTP compression in the .NET Framework 1.1 - and based on his ideas, I have made some small modifications in order to make it fit the SOAP Relay Protocol.
The real "magic" is done in the class
ProxyWebRequest. It is derived from
HttpWebRequest, and maintains a
MemoryStream (embedded as the class
ProxyStream) to which the SOAP request is written by the .NET framework. The SOAP request is encoded and sent when the .NET framework calls
GetRequestStream() which encodes the request, sends it, and returns the response back to the .NET framework.
You are free to use this code in any way you wish - both in freeware and commercial programs - free of charge. A small link in an About box etc. will be appreciated - but it is not required.
15. Other Haskell utility programs¶
This section describes other program(s) which we distribute, that help with the Great Haskell Programming Task.
15.1. “Yacc for Haskell”: happy¶

Andy Gill and Simon Marlow have written a parser-generator for Haskell, called happy. Happy is to Haskell what Yacc is to C.

You can get happy from the Happy homepage.
Happy is at its shining best when compiled by GHC.
15.2. Writing Haskell interfaces to C code: hsc2hs¶
15.2.1. command line syntax¶
hsc2hs takes input files as arguments, and flags that modify its behavior:
-o FILE, --output=FILE
- Name of the Haskell file.

-t FILE, --template=FILE
- The template file (see below).

-c PROG, --cc=PROG
- The C compiler to use (default: gcc).

-l PROG, --ld=PROG
- The linker to use (default: gcc).

-C FLAG, --cflag=FLAG
- An extra flag to pass to the C compiler.

-I DIR
- Passed to the C compiler.

-L FLAG, --lflag=FLAG
- An extra flag to pass to the linker.

-i FILE, --include=FILE
- As if the appropriate #include directive was placed in the source.

-D NAME[=VALUE], --define=NAME[=VALUE]
- As if the appropriate #define directive was placed in the source.

--no-compile
- Stop after writing out the intermediate C program to disk. The file name for the intermediate C program is the input file name with .hsc replaced with _hsc_make.c.

-k, --keep-files
- Proceed as normal, but do not delete any intermediate files.

-x, --cross-compile
- Activate cross-compilation mode (see Cross-compilation).

--cross-safe
- Restrict the .hsc directives to those supported by the --cross-compile mode (see Cross-compilation). This should be useful if your .hsc files must be safely cross-compiled and you wish to keep non-cross-compilable constructs from creeping into them.

-?, --help
- Display a summary of the available flags and exit successfully.

-V, --version
- Output version information and exit successfully.
15.2.2. Input syntax¶
All special processing is triggered by the # operator. To output a literal #, write it twice: ##. Inside string literals and comments, # characters are not processed.

The following directives are supported:
#include <file.h>, #include "file.h"
- The specified file gets included into the C program, the compiled Haskell file, and the C header. <HsFFI.h> is included automatically.

#define ⟨name⟩, #define ⟨name⟩ ⟨value⟩, #undef ⟨name⟩
- Similar to #include. Note that #includes and #defines may be put in the same file twice so they should not assume otherwise.
#let ⟨name⟩ ⟨parameters⟩ = "⟨definition⟩"
- Defines a macro to be applied to the Haskell source. Parameter names are comma-separated, not inside parens. Such a macro is invoked like other #-constructs, starting with #name. Values of arguments must be given as strings, unless the macro stringifies them itself using the C preprocessor's #parameter syntax.
#def ⟨C_definition⟩
- The definition (of a function, variable, struct or typedef) is written to the C file, and its prototype or extern declaration to the C header. Inline functions are handled correctly. struct definitions and typedefs are written to the C program too. The inline, struct or typedef keyword must come just after def.
#if ⟨condition⟩, #ifdef ⟨name⟩, #ifndef ⟨name⟩, #elif ⟨condition⟩, #else, #endif, #error ⟨message⟩, #warning ⟨message⟩
- Conditional compilation directives are passed unmodified to the C program, C file, and C header. Putting them in the C program means that appropriate parts of the Haskell file will be skipped.
#const ⟨C_expression⟩
- The expression must be convertible to long or unsigned long. Its value (literal or negated literal) will be output.

#const_str ⟨C_expression⟩
- The expression must be convertible to a const char pointer. Its value (string literal) will be output.
#type ⟨C_type⟩
- A Haskell equivalent of the C numeric type will be output. It will be one of {Int,Word}{8,16,32,64}, Float, Double, or LDouble.
#peek ⟨struct_type⟩, ⟨field⟩
- A function that peeks a field of a C struct will be output. It will have the type Storable b => Ptr a -> IO b. The intention is that #peek and #poke can be used for implementing the operations of class Storable for a given C struct (see the Foreign.Storable module in the library documentation).

#poke ⟨struct_type⟩, ⟨field⟩
- Similarly for poke. It will have the type Storable b => Ptr a -> b -> IO ().

#ptr ⟨struct_type⟩, ⟨field⟩
- Makes a pointer to a field of a struct. It will have the type Ptr a -> Ptr b.
#offset ⟨struct_type⟩, ⟨field⟩
- Computes the offset, in bytes, of field in struct_type. It will have type Int.

#size ⟨struct_type⟩
- Computes the size, in bytes, of struct_type. It will have type Int.

#alignment ⟨struct_type⟩
- Computes the alignment, in bytes, of struct_type. It will have type Int.
#enum ⟨type⟩, ⟨constructor⟩, ⟨value⟩, ⟨value⟩, ...
- A shortcut for multiple definitions which use #const. Each ⟨value⟩ is a name of a C integer constant, e.g. an enumeration value. You can supply a different translation by writing ⟨hs_name⟩ = ⟨c_value⟩ instead of a ⟨value⟩, in which case ⟨c_value⟩ may be an arbitrary expression. The ⟨hs_name⟩ will be defined as having the specified ⟨type⟩. Its definition is the specified ⟨constructor⟩ (which in fact may be an expression or be empty) applied to the appropriate integer value. You can have multiple #enum definitions with the same ⟨type⟩; this construct does not emit the type definition itself.
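Taken together, a typical .hsc file mixes these directives with ordinary Haskell. The sketch below is purely illustrative: the header <point.h> and the C struct point with int fields x and y are hypothetical, and the file only becomes valid Haskell after hsc2hs has processed it against a real header:

```haskell
-- Illustrative .hsc input; "struct point" and <point.h> are hypothetical.
module Point where

import Foreign
import Foreign.C.Types

#include <point.h>

data Point = Point { px :: CInt, py :: CInt }

-- #size, #alignment, #peek and #poke generate the Storable boilerplate
-- from the C struct layout at hsc2hs time.
instance Storable Point where
    sizeOf    _ = #{size struct point}
    alignment _ = #{alignment struct point}
    peek p = Point <$> #{peek struct point, x} p
                   <*> #{peek struct point, y} p
    poke p (Point x y) = do
        #{poke struct point, x} p x
        #{poke struct point, y} p y
```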
15.2.3. Custom constructs¶

15.2.4. Cross-compilation¶

In cross-compilation mode, the following constructs cannot be used:

#{const_str}
#{let}
#{def}
- Custom constructs
Your message dated Mon, 23 Aug 2010 14:12:42 +0200 with message-id <20100823121242.GI12469@melusine.alphascorpii.net> and subject line Package got removed has caused the Debian Bug report #16815, regarding tkfont: wishlist, to be marked as done. -- 16815: Debian Bug Tracking System. Contact owner@bugs.debian.org with problems
--- Begin Message ---
- To: submit@bugs.debian.org
- Subject: tkfont: wishlist
- From: Yann Dirson <ydirson@a2points.com>
- Date: Thu, 8 Jan 1998 01:48:32 +0100
- Message-id: <199801080048.BAA13387@ppp37.a2points.com>

Package: tkfont
Version: 1.1-1
Severity: wishlist

Here is a list of suggestions sent to the original author, but which bounced back into my mailbox.

> * restricting display according to XLFD parts (just like xfontsel)
>   would be nice. This would be really useful, as xfontsel doesn't have
>   scrollbars in its menus, and I sometimes have to inspect directories
>   with > 200 fonts.
> * editing fonts.dir should IMHO not be allowed, as changes would be
>   overwritten by mkfontdir. I think only fonts.alias should be edited.
> * uncompile pcf => bdf (eg. using a font server and asking him the pcf
>   data.)
> * it would be nice to be able to sort the fonts according to XLFD
>   fields. Right now it can be damn difficult to find closely-related
>   fonts. (restricting display, as already suggested, would already help
>   much in this respect)
> * specifying custom sizes for scalable fonts should be easier than
>   editing the bottom line.
> You seem to be puzzled by the last XLFD field. As far as I can tell,
> the actual font-encoding is defined by both of the 2 last fields. Eg
> you can have a latin1 font (ending in -iso8859-1) and a latin2 font
> (-iso8859-2), as well as others (-iso2022-0, -iso2022-1, etc.).
>
> The XLFD standard (from X docs, on any X mirror) calls them
>
> * "charset registry" is the registration authority (namespace handled
>   by the X consortium) eg: "iso8859", "iso2022", etc.
> * "charset encoding" has only a meaning when associated with the
>   registry, and is defined by this authority.
>
> You may find many interesting informations in this document.

-- Yann Dirson <ydirson@a2points.com> | Stop making M$-Bill richer & richer,
alt-email: <dirson@univ-mlv.fr> | support Debian GNU/Linux:
debian-email: <dirson@debian.org> | more powerful, more stable !
Check <>
--- End Message ---
--- Begin Message ---
- To: 518160-done@bugs.debian.org, 518158-done@bugs.debian.org, 16815-done@bugs.debian.org, 486767-done@bugs.debian.org
- Subject: Package got removed
- From: Alexander Reichle-Schmehl <tolimar@debian.org>
- Date: Mon, 23 Aug 2010 14:12:42 +0200
- Message-id: <20100823121242.GI12469@melusine.alphascorpii.net>

Version: 1.1-13+rm

Hi!

As the package got removed from the archive (please see for details) I hereby close these bug reports.

Best Regards,
Alexander
--- End Message ---
The Ten Boxes of Heterodoxy, or Why Economics Sucks
"I sound my barbaric yawp over the roofs of the world.".
Box No. 11: PUBLIC GOODS

Public goods are products or services that benefit the whole community. They are not optimally or well distributed by "free markets". This is primarily because they are characterized by: (1) value that benefits everyone, even those who do not purchase them (i.e. immunizations protect the unimmunized by herd immunity - these are socially desired free riders!), (2) investment costs that are often too large for any individual or corporation to bear by itself while earning a reasonable rate of return, (3) a need for a higher level of administration than any individual or company can arrange, and (4) value that accrues over time and is difficult to price properly. In the current ideological distortion of economic fundamentals, the police, the military, water resources, and communication frequencies, among other items, are being privatized, though at high expense and inefficiently.
July 11, 2007 8:59 PM | Reply | Permalink
I know something of economic theory and the jargon used, so following some of this was not too much of a challenge, but the use of acronyms really does detract from an otherwise compelling argument. The unemployment meme has been challenged at some length recently, as has market efficiency, so those arguments are familiar and need no additional background. This would win lots of arguments with typical Republicans who spout talking points but have no sound basis for their assertions; it may not win with those who have looked at the topic in depth. An example can be found in the final section: "shareholders do not choose managers" - true to some extent, but they do choose which shares to buy and are less likely to buy from badly managed companies. The piece would benefit from a little more explanation and from defining the acronyms before first use.
July 11, 2007 9:10 PM | Reply | Permalink
Bravo!
Especially item 10. "Economics" is so concerned about appearing scientific that it excludes anything that cannot be measured reliably. Power cannot be measured reliably. It is easily recognized, but quite unmeasurable.
The result? Economic theory is established in a world without power relationships. Granted there are discussions about what happens when a supplier has power over a purchaser or about the arcane issues in Agency theory, but when it comes time to establish government policies, all such discussions are simply ignored. And they are rightfully ignored because there is nothing really useful there. They simply don't offer compelling ideas.
What happens when economics is applied to government? Government should consist of the organizations and systems that provide goods and services which cannot be properly valued by using market based economic transactions, but are clearly needed. Police and fire services quickly come to mind. The current Republican idiots running Texas government are trying to convert many major highways into toll roads and sell them to investors - one Spanish firm comes to mind - I am sure the governor will get a large consulting fee if that one goes through. The theory is that the investors will provide more efficient services. What in fact they will do is provide dead minimal services they can get away with while charging the maximum revenue allowed by law for the 40-50 year period of the contract and take that money out of state somewhere as profit. It's not like there are any real innovations in running a toll road that can be applied to make it more efficient. The art has been around for centuries, and most of the research has been done by governments to begin with. And since a toll road is automatically a monopoly, there is no economic incentive to improve the system.
But that is the level at which economics is applied to government policy. "Free Market - Private investors - Ugh. Good! Government - Ugh. Bad." Yeah, government is bad for the corrupt idiots who don't what anyone watching as the steal the public blind.
My current bitch with economics is the idea that if you want some service provided better, then you structure a financial reward system that rewards the production of better services. Merit Pay for teaching is an example. It would be great if it could possibly work. Find out what economic inputs and teaching processes provide better education outcomes and reward those who provide the better teaching processes when the students demonstrate that at the end of the year they are better educated than they were at the beginning.
Only - we don't know how to measure successful education outcomes well, especially in a period as short as one year. Even if we did, we don't understand the process well enough to know what education techniques good teachers do better than mediocre ones do. We don't even know for sure what part of the education process is a group process and what part is an individual process. To top it off, we cannot tell what education the students bring with them and what they do on their own.
This means that any "merit pay for teachers" system is going to be administered by either supervisors or by a committee using god only knows what for criteria. That means that if if teachers are "Economic Men" and want to maximized their personal income, then their focus needs to be on gaming the system of determining merit pay, rather than on teaching students above a bare minimum level so as to not appear incompetent. Don't educate - teach to the test. Get poor students to drop out, then shift them to other accounting systems so that they don't count against you. Need I describe all the ways No child Left Behind has failed? [Schools used to provide patronage jobs is an entirely different issue, and again is one that cannot be solved by using Economic theory.]
I don't have my thoughts on economics formed into neat little boxes yet as you have, but I do know that economics as a system of thought is merely that - a system of thought that often provides a surprising insight into measurable human behavior, but it is far from the imaginary solution to all things social and group-based. Since it does not measure power, it almost invariably screws up in political situations.
July 11, 2007 9:18 PM | Reply | Permalink
Productivity.
For years we’ve heard the glories of productivity. It’s wonderful, we’re producing more widgets with less human input. But comrades, we can do even better!! For the glory of the economy do your best!
It’s easy to increase productivity in a labor market where workers are fearful of losing their jobs (think health insurance). Let’s say we have 100 people producing 1,000 widgets per week. Simply fire 10 of them and imply (or threaten) you’re ready to fire more. But of course the goal, 1,000 widgets per week, remains the same for the 90 remaining comrades. Wow, you’ve suddenly increased productivity by 11.1%, it’s magic!
In strictly monetary terms, dollars out for dollars in, you can accomplish exactly the same thing by cutting wages. Keep on employing the full 100 people but cut their wages. The more you cut wages while demanding the workers meet the original production goal the higher the productivity climbs.
So when you hear the “good news” about rising productivity think twice about what this really means.
July 11, 2007 10:11 PM | Reply | Permalink
Max has reached that place where only a red roadster can prevent spontaneous immolation -- probably, a Miata.
July 11, 2007 10:53 PM | Reply | Permalink
I remember first reading Galbraith's The New Industrial State, in particular his discussion of Wisconsin dairy farmers as examples of firms more typical of those in more traditional, unplanned markets, and wondering if Galbraith would have chosen a different example if he'd ever seen his television ask him, "Got Milk?" (One thesis in The New Industrial State is that advertising implies demand management, which, in turn, implies a market dominated by "planning.")
Perhaps a better way to sum up all of these points is that orthodox Economics is all about solving a fundamental problem that no longer exists: finding an allocation of resources that maximizes the amount of met needs in society. In Adam Smith's day, society lacked the capacity to produce enough to meet everyone's basic needs. The problem, then, was how best to allocate inputs.
Today, the fundamental Economic problem is different. Poverty exists despite the fact that we have the capacity to produce enough to meet the basic needs of the entire planet. Today, the fundamental Economic problem resolves around distributing outputs instead of allocating inputs.
We graft new assumptions and methods onto an old model that focuses on allocating inputs in the hope that it will lead to a satisfactory distribution of output, but there really is no connection between what the orthodox model seeks to describe and the basic problems we need to solve in society today.
This is why, for example, orthodox models have difficulty with employment issues. Employment is both an issue of allocating labor as a resource and an issue of distributing income to workers, which is why traditional supply and demand models don't explain things like the effects of a minimum wage. We can see the effect of a minimum wage on allocating labor to firms, but we can't see the effect of a minimum wage on the distribution of income across the labor force.
July 11, 2007 11:44 PM | Reply | Permalink
Excellent, Max.
July 11, 2007 11:48 PM | Reply | Permalink
Great post. Here's my problem with economics: it puts itself forth as some sort of predictive science with laws that policy makers have to follow. But, the only reason it has any predictive power at all is that we've set up our economy according to economic laws.
The laws of physics describe reality. We didn't create reality, we were born into it. So, we have to obey the laws of physics.
But we create economies. Economists tells us we have to obey economic laws. But, we don't. We can alter the fundamental scheme that those laws describe.
thosethingswesay.blogspot.com
July 12, 2007 4:34 AM | Reply | Permalink
Thanks Bloke. I took out most of the acronyms.
July 12, 2007 4:57 AM | Reply | Permalink
11. Transitivity of preferences. Or rather, non-transitivity of preferences. In arithmetic, if a > b and b > c then a > c. And it is an assumption of classical microeconomics that people behave this way too: if you choose chocolate over vanilla and vanilla over strawberry then you will of course choose chocolate over strawberry. But I personally will choose chocolate over vanilla and vanilla over strawberry, yet offered chocolate and strawberry I will choose strawberry 2/3 of the time. In my observation this is true of many areas of human behavior - and it damages the Micro 101 arguments pretty badly.
12. Optimizing vs. satisficing. Basic utility theory says that human beings optimize every decision they make. That is, they survey all possible alternatives and choose the one with the absolute highest payoff in every case. However, most humans have neither the time nor the inclination to optimize even a fraction of their decisions and satisfice instead: they look at a few alternatives and pick the best of that much smaller group. Sort of like climbing to the top of the tallest hill within a half-hour drive rather than finding and climbing the tallest hill in your state. Much more sophisticated micro models have been constructed that take this into account, but these are taught at the graduate level and are absolutely not the simplistic models that are used as policy guides on the Sunday talk shows - or the floor of the Senate.
sPh
July 12, 2007 5:24 AM | Reply | Permalink
Not to razz you too much, but I think a large problem with economics as a whole is that a lot of the time it tries to describe emotional, cultural and social behavior through raw numbers. Which just doesn't work. In all of your cases, this really is what is happening.
Economics can be useful to try and predict various results. Sort of like being an economic weatherman, per se. The problem is that the variables at play are so complex and almost unpredictable at a micro level, to render the prediction unreliable.
I guess what I'm saying is that economics needs to be taken out of the sciences, and put into the humanities, so to speak. This will probably require a complete reboot of the entire field..which is probably a good thing..
July 12, 2007 5:30 AM | Reply | Permalink
Collective goods:
Privacy is just another word for property, with both the demand for and, hence, the supply of privacy -- including police services -- following the distribution of property. This is probably non-linear, meaning that those with more and more property may well have less and less privacy unless they command greater and greater police and guard forces.
Note that the US now has the military and police institutions of the British empire and devotes something on the order of 26% of its workforce to "guarding things" -- without having much in the way of economic, personal, or physical security save at "undisclosed locations" where its political-economic nomenklatura hide out, avoiding political accountability and market discipline alike.
Security, as in "to secure the blessings of liberty for ourselves and our posterity", is a collective good, based on communitarian sources of happiness, including sacrificial and altruistic motives, social cohesion, fraternity, equality, and other traditional qualities of adult life that are elaborately disparaged by deranged, juvenile, racist, or senile libertarians.
(The few economists with a modicum of actual military, political, or civic experience beyond patronage of very wealthy individuals or powerful father-figures tend to be rather heterodox.)
Conceivably this collectively consumed good -- security -- could be privately produced by mercenaries funded by efficient and fair taxation. But, since the Treaty of Westphalia, the traditional way of providing for it in a republic is a "well regulated militia" -- that is to say, a universal military obligation coterminous with a universal franchise.
Curiously, the US Constitution is clear and constructive on these points but largely ignored by anglophile legal theorists -- masquerading as economists -- who never went along with substitution of the phrase "pursuit of happiness" for "property".
::JRBehrman
July 12, 2007 5:56 AM | Reply | Permalink
I don't think economics can be considered a true science, either. Statements of the obvious (supply and demand) aren't the same as fundamental "laws" of nature (E=mc squared). Saying that economic, or any other kind of, decisions are fundamentally rational always assumes there is something called "rationality" that exists outside individual beliefs, fantasies, needs and obsessions.
Economics should be shifted back to the "by guess and by God" category.
July 12, 2007 5:57 AM | Reply | Permalink
The unstated goal of economics is not to better understand the world, but find the way to "optimize" trade. This is like the difference between biomedical research and clinical medical practice. One discipline examine the how and the other just uses whatever tools are available to fight disease.
Economics spends too much time in the clinical phase and not enough in the research phase (in spite of the fancy formulas that academic economists turn out). Models are not data; they are hypotheses. They don't explain anything, they just quantify it.
Since the goal of economics is to "optimize" the functioning of the market there is little consideration given to other possible viewpoints. In general "optimizing" means enhancing growth. This can be through improvements in efficiency or productivity, technological change or even controlling markets via monopolies or oligopolies.
The group of ecological economists have spent the last 40 years questioning this assumption. They make the obvious point that growth can't continue in a finite world. Only recently has anyone even started to listen to them, inspired by the looming prospect of peak oil.
If we are to replace growth as a goal with a steady-state society, then we can no longer have a capitalist economic system. This change would render useless most of the popular economic work of the 20th Century. It is time for some group of fresh thinkers to start pondering this question. What will a post-capitalist society look like, and how can we transition to it with the minimum of dislocation?
--- Policies not Politics
Daily Landscape
July 12, 2007 6:05 AM | Reply | Permalink
Exactly. My short-form version of this is "economics is sociology that thinks it's physics."
July 12, 2007 6:54 AM | Reply | Permalink
I especially appreciate #8. I'm always puzzled by liberals who are unalloyed free traders. If you tell them that environmental regulations, worker safety laws, or corporate taxes are distortions of the market which inhibit economic activity and destroy jobs, they will rightly scoff. But if you say the exact same things about tariffs or trade laws, they'll nod their heads approvingly.
July 12, 2007 6:58 AM | Reply | Permalink
Gross Domestic Product (GDP). Add up all the quantities in the supply and demand models over the year ("final goods and services") and you get GDP.
Huh? To think that all along I thought the GDP was the total market value of all the goods and services produced within the borders of a nation during a specified period.
July 12, 2007 6:59 AM | Reply | Permalink
So much for Marx.
July 12, 2007 7:24 AM | Reply | Permalink
Actually, it's pollution that's the distortion in the market place because it is a cost assumed by society in benefit of the polluter. Environmental regulations correct this.
July 12, 2007 7:30 AM | Reply | Permalink
The entire traditional macro apparatus fails to allow for the interventions of large foreign lenders who aren't dumb enough to believe what the textbooks say.
Actually, they are dumb if they don't invest in the Euro, which isn't in disgraceful decline and therefore diminishing in yield due to deficits.
Well, except for China, who's getting its lent money back via the trade deficit. With budgetary deficits, the US becomes China's sharecropper.
July 12, 2007 7:38 AM | Reply | Permalink
Apropos a liberal blog, how is the enabling foundation of law and courts, weights and measures, certification of purity, etc., factored into economic equations?
Economies aren't even possible absent orderly society, at least somewhere in the world.
July 12, 2007 7:38 AM | Reply | Permalink
Here's my # 11:
11) Natural resources are cheap and endlessly abundant. Labor is scarce and expensive. Thus, the trade off of more resource inputs (fuel, plastic, grain, or whatever) in order to reduce labor costs is always a good thing. The costs to employment and the environment are externalized and so have no impact on profit. And the fact that natural resource extraction is heavily subsidized is not even considered.
Also, I've said this before and I'll say it again. Economics is not a science. The "science" it comes closest to is Asimov's psychohistory.
July 12, 2007 8:04 AM | Reply | Permalink
Not quite, Ad. GDP does not include intermediate goods used to produce other goods in the nation and time period in question, since that would be double counting.
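The double-counting point is easy to make concrete with a toy calculation (the goods and prices below are entirely hypothetical, purely for illustration): summing every transaction in the chain overstates output, while summing only final sales, or equivalently each stage's value added, yields the same GDP figure.

```python
# Hypothetical production chain: wheat -> flour -> bread.
# Only the bread is a "final good"; the rest are intermediate inputs.
transactions = [
    {"item": "wheat sold to miller",   "value": 100, "final": False},
    {"item": "flour sold to baker",    "value": 150, "final": False},
    {"item": "bread sold to consumer", "value": 250, "final": True},
]

# Naive sum counts the wheat and flour twice (they're embedded in the bread).
naive_total = sum(t["value"] for t in transactions)

# GDP, final-goods method: count only final sales.
gdp_final = sum(t["value"] for t in transactions if t["final"])

# Equivalent value-added method: each stage's sale price minus its inputs.
value_added = [100 - 0, 150 - 100, 250 - 150]
gdp_value_added = sum(value_added)

print(naive_total, gdp_final, gdp_value_added)  # 500 250 250
```

Both the final-goods and value-added routes agree (250 here), while the naive sum of all transactions (500) double-counts the intermediate stages.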
July 12, 2007 8:04 AM | Reply | Permalink
Psychohistory, although conveniently fictional, at least had predictive power.
July 12, 2007 8:12 AM | Reply | Permalink
In my opinion, there are few if any natural laws of economics and those that perhaps once existed melted away when man left the hunter-gather stage of his evolution and moved onto farms and into villages and cities. Further the geometric progression of technology in the past 300 years has also played hob with the existence of natural laws in economics.
Current establishment-orientated economics appears to be the creation of the ruling managerial/ownership class in order to justify, by providing ironclad talking points (laws), that small subset’s usual and far too large slice of the pie. Even the jargon of economic theory sometimes appears to be a device designed to force a period of study, at the hands of an Economics establishment, on those interested in economic policy, where those with opinions differing from that establishment’s opinions can be weeded out of a rather elite club. See it our way and we’ll allow you to join the club…the furnishings and opportunities are nice.
July 12, 2007 8:28 AM | Reply | Permalink
That was great, Max. I think one could almost go on indefinitely with what the assumptions overlook. Even setting aside social goods, inequalities, externalities, etc., etc., there's so much.
It leaves out the role of government and society in creating markets and competitors. The fiction of a corporation as a person covers this up, as if they were natural. It leaves out the role of governments in creating entire economies, as in the historic role of first canals and then railroads in American industrialization, or highways in later enabling both trucking and a new shape for American living.
It leaves out how much of the government that market fans want to privatize exists owing to a market failure, such as public schools, created because schooling no longer was available to all, or the NYC subways, which became a single public entity because no one wanted them. Now market fans, in contrast, blame passenger railroads if they're unprofitable and try to pretend that the changes in airport regulation and air traffic control since Reagan didn't create a problem.
It leaves out the creation of labor markets and the need for them. Without the post-WWII home subsidies (explicit and implicit in the tax structure), rent regulation in return for tax cuts to developers, college grants, and so on, there wouldn't have been the "ideal" family of before the culture wars as fodder for flourishing central financial districts. And so on. Sometimes, too, the free-market objection takes on selective blinders. Say, more support exists for a mortgage deduction, as an incentive, than for rent regulation, which disturbs the market, but both were part of the same history.
It of course leaves out the classic criticism from Marx to Krugman that maybe the market really does favor consolidation and not more players.
And I've ranted before about the confusion of ends (more efficient markets) with means (business efforts, when efficiency may result instead from failures and new market entries). I'm not an economist obviously, but why is this all so hard for economists to model?
John
July 12, 2007 9:02 AM | Reply | Permalink
True, GDP is a straightforward arithmetic calculation unencumbered by abstract "supply and demand models."
July 12, 2007 9:16 AM | Reply | Permalink
But if you study the economics of firms, even under orthodox auspices, you find out they don't maximize profits.
Right. They maximize the promotion and self-preservation of powerful individuals, to the expense of everyone else.
For example, say you have a VP who is heading toward retirement. His pension is based on his earnings during his last five years of employment. His earnings include bonuses for increases in productivity and cost management. Therefore, to maximize his bonuses and thus the final value of his pension, he cuts the pensions of all his employees and lays off a tenth of the work force to increase productivity.
No actual profit is made for the company or the shareholders. All the benefit goes to the individual, and the next VP who comes along will be forced to do essentially the same thing, only worse, because she is expected to exceed the performance of her predecessor. Meanwhile, morale plummets, employee turnover skyrockets, the company has to spend ever more on recruiting, retaining, and training new employees - meanwhile the product suffers and sales drop.
July 12, 2007 9:32 AM | Reply | Permalink
Most economics affirms that some foundation of government to enforce contracts etc. is necessary for capitalism to function. So that's at least one thing I can't complain about.
July 12, 2007 9:42 AM | Reply | Permalink
Free market capitalism assumes the economy is the ecosystem in which the organisms of society must fight to survive, but society in competition is tantamount to war. The economy is the internal, digestive ecosystem of the society. A proof of this is that public debt is an essential percentage of private sector investment. Recycling surplus wealth back through the public sector is necessary to maintain the value of the currency. As with Social Security, there just isn't enough capacity in the private markets to hold that amount of investment, so the government recycles it and provides a promise of return. The question is whether there will eventually be a debt writeoff, or will all government assets be eventually privatized. The highways all become toll roads, private security companies buying warships, selling off parkland, etc.
The problem with running the economy like a game of Monopoly is that when one person controls everything, the game is over and you start again. In real life, this stage is called revolution.
July 12, 2007 10:08 AM | Reply | Permalink
But do they want to pay for it? Do they assign a value to it or only say some amount (unspecified) is necessary? Which economists argue in favor of more of it?
July 12, 2007 10:10 AM | Reply | Permalink
Go read the writings of Oliver E. Williamson, who uses transaction cost analysis to explain why organizations are necessary to internalize and reduce those aspects of uncertainty and opportunism present in the market.
He explains why sometimes markets are too expensive (because of the information acquisition costs required), so they are replaced by the hierarchical allocation functions of organizations. This explains why some functions are performed by markets of relative degrees of freedom, while some functions are marketed by large hierarchical organizations that control an entire supply chain. If you want innovation, you want a market filled with small firms that compete with each other, but it is expensive in terms of unit cost.
If you want low unit cost and high efficiency, then you want a big business, often in an oligopoly or even monopoly market. But the innovation is largely lost, to be replaced by centrally controlled allocation of resources. Such a business model will routinely take monopoly profits, so that the lower costs of production are often not available to the end consumers.
Some of his arguments can also be extrapolated over to the distinction between functions best performed by business and those best performed by government. I haven't seen the argument in Williamson, but government frequently provides essential functions which are quite standardized (how much innovation is possible in the way a toll road or even a public road is run? We are talking about engineering that has developed since the Romans). Government also does the best job of providing services for which the costs cannot be allocated directly to the end users, such as police, fire suppression, justice and defense. The free-rider effects in these areas are tremendous.
As for Marx - discussing Marx is sooo 19th century. For economics you really want to look at Keynes and post-Keynesian developments. Particularly look at the causes of the Great Depression and the (still imperfect) solutions.
You may note that I am not totally anti-Economist. It's just that in political policies, the original discipline of Political Economics died sometime after the Depression. It can't be found in the Universities or in the halls of Congress - and God Forbid that any living news person or pundit attempts to explain economic theory and apply it to political policy! For every time someone demands an economic solution to a social problem, turn around and ask who gains power from that solution and why they want it installed. The power analysis will be a lot more useful than the economic analysis unless you are looking at central banking and the Federal Reserve. And even then, power is the more important issue.
July 12, 2007 10:16 AM | Reply | Permalink
"The entire traditional macro apparatus fails to allow for the interventions of large foreign lenders who aren't dumb enough to believe what the textbooks say."
Sawicky is completely wrong about this. It is a well-established part of open-economy macroeconomics that the higher the degree of international capital (funds) mobility, the less the effect of government deficits in increasing interest rates and the greater the effect of deficits in increasing the deficit in the trade balance. In the limiting case of an economy too small to have a significant effect on world interest rates, with perfect capital mobility, deficits have no effect on the interest rate at all and only increase the deficit in the trade balance.
I suggest that he inform himself of this by reading chapter 5 in Mankiw's intermediate Macroeconomics text, including the appendix on the large open economy.
July 12, 2007 10:34 AM | Reply | Permalink
In grad school we occasionally discussed the lack of reliability of economic predictions in comparison with the predictions of physicists. Our conclusion was that physicists are lucky that the subjects of their calculations do not have minds and goals of their own, and do not read the physics journals to try to learn the latest ways to game the advances in physics.
Economics may have inherent laws, but since the subjects-of-study DO read the journals and game the results, the feedback effects of every reliable discovery of economics cause almost immediate arbitrage effects. The system being studied immediately incorporates the discovery and all we see are the left-over random effects that the theory cannot explain.
July 12, 2007 10:43 AM | Reply | Permalink
"Power looms over economic transactions, except in economic theory. Workers do not hire capitalists."
Sawicky is right on about this one. One of the central features of the New Classical Economics, in both the monetary and real business cycle versions is that cyclical unemployment is caused by the unemployed workers choosing leisure. This proposition blindly ignores the fact that the increased unemployment during a recession is not caused by workers voluntarily quitting their jobs and choosing to remain unemployed but, rather, by being told by their employers that they are no longer needed even though they would have been perfectly willing and even eager to keep working at the wages they were being offered.
It would have been very instructive for some of the new classical economists to go to Flint, Michigan during the Bush I recession and explain to the unemployed auto workers that the reason that they were unemployed was because they were choosing leisure. They would have gotten some teeth knocked out and some ribs broken, but they would have learned a lot more about how economies function during recessions than they were learning from their ivory tower theories.
July 12, 2007 10:45 AM | Reply | Permalink
But this does not prove that economic theory sucks. Economic theory includes New Keynesian economics, according to which sticky prices and wages can cause markets to fail to clear during recessions. Rather than condemning all economic theory across the board, Sawicky should recognize that there are important disagreements among mainstream economic theorists.
July 12, 2007 10:52 AM | Reply | Permalink
Whether something is a science or not does not rest on how good the predictions are. It is all about how the questions are asked regarding any unknown phenomenon.
As I argued above, physicists have an easier to study subject than economists do. Relatively speaking, of course.
Take a social science research methods course and you will learn what true science is all about. It starts with description, goes to hypotheses, proves or disproves them, refines the questions and recycles. The consistent and reliable predictions are sometimes thrown off of that process, but they merely indicate that science has taken place. They are not themselves what science is.
True science is about how the questions are asked and what is done to answer those questions.
[I once had a letter to the editor on the subject published in "Analog," and they garbled it horribly in printing it. Here I have a chance to control for that.]
July 12, 2007 10:56 AM | Reply | Permalink
"1. Supply and demand, 1. This celebrated and most basic economic model while in principle multidimensional in practice obscures anything interesting that affects market conditions."
If economics is not science, how do you assess the relative importance of those various "interesting things"? The same question applies to other intellectual activities that have even less of a connection to science than economics. I'm going to go off on a bit of a tangent; feel free to skip over this and go to the next comment. I won't mind.
Asocial science and claims from authority:
In the course of trying to follow a discussion on this site recently (in a discussion of censorship, ironically enough) I learned things about how this site works that I had not paid enough attention to earlier. And that is not only that "karma" marks popularity but also that if your karma is low enough, it blocks access to some information on the site. With a low rating I was literally unable to see certain other comments that others were discussing. That was a shocking realization, though other people seem to take it -not my own experience, but the logic behind it- in stride. For myself I've been troll-rated, by the founder of this site, for statements of fact; statements offered in spite but nonetheless documented as fact. The same thing has happened on another site, of one of the more verbally active blogging neoliberal defenders of classical economics. I've had comments removed for making statements of fact, documented in one case with US Government reports; in that case having to do with the history and actions of Hezbollah.
None of this is simply sour grapes. I've yelled about this elsewhere so there's no point in going on about it now. But the issue is gate-keeping, the watching of watchmen, or the necessity of structures that keep the watchmen in check.
The following are examples of why such structures are necessary:
-There are no advocates for the interests of the Palestinian people among the officially designated representatives of this site. That is as if there were no women and the subject were feminism. I won't even get into race here.
-There are no [almost no?] discussions of foreign policy on this site that are not led by those who predicate their arguments on the logic of the United States being "the Necessary Country." Simply put, this is the logical equivalent of the argument that one or another of us is the "necessary man." And of course that's what our President considers himself.
- And of course to get back to the subject of this post, economists think theirs is the necessary measure of man. But thanks to Max, economics is the exception to the rule at TPM. There has been no discussion of heterodox international relations or foreign policy. International political history at TPM is simply American political "science." Where are the foreigners here, even in the discussion of Iraq? Where is Reidar Visser? Is there even one author here who reads, let alone is fluent in, Arabic?
Should I even have to ask this question?
Josh Marshall, Brad DeLong, M.J. Rosenberg, Matthew Yglesias and others say "trust us." No.
---
Here's some of today's news
---
and already I've garnered a "1" with no explanation.
July 12, 2007 10:57 AM | Reply | Permalink
Max:
Attacking neo-classical economics is just too easy! A few points of my own:
1] they unabashedly say they don't care about the creation of wealth, their system is only supposed to be about the allocation of wealth;
2] the theory of the firm has come a long way since the 1930's and no one really thinks firms maximize profits;
3] economics, contrary to the supposition of some of the posts here, cannot predict anything: it's their dirtiest secret that their models fail completely to predict the economy with any precision;
4] the neo-classical agenda is to defend something they call "the market" which they see as some abstraction, but which they cannot define because to do so would introduce real live human beings rather than the automatons that inhabit their models;
5] neo-classicism is so riven through with nonsensical assumptions that they are embarrassed to defend them in public: rational expectations, perfect information, etc. are so absurd as to be ridiculous. When they do defend themselves, they can only offer up the weak notion that all their effort has given them "some useful insights" into the way the economy works.
6] So divorced from the real world is the orthodox system that pretty much anything we come across in reality is regarded as a "market failure", i.e., it is deficient and inefficient with respect to the theory. No other science regards the real world as a "failure"; instead they actually seek to explain it!
In short the whole neo-classical enterprise has taken the study of economics down an ideologically conceived dead end. This wouldn't matter except that they have duped politicians of both parties into thinking that economics is a serious science on a par with physics etc. It is not. Nor can it be until the choke hold of neo-classicism is broken. One example: if market allocation is so superior to the alternative, generically known as central planning, why is it that all the firms I know manage themselves using planning rather than market forces? Clearly the market is not "best" for a vast array of transactions.
One final point: two weeks ago the Supreme Court held that companies can restrict the discounting of their products. This represents a clear introduction of a market inefficiency from the orthodox point of view: surely, if markets are as wonderful as we are led to believe, retailers should be allowed to discount. I would have thought the neo-classical crowd would be up in arms attacking the Court's finding. But no. One of the groups siding with big business was the infamous economists from Chicago, the epicenter of orthodoxy! Their argument was that by allowing prices to be fixed at higher levels we would encourage and protect firms who innovate, thus increasing our choices and increasing long-run competition. It's amusing that they never seem to apply price rigging to labor costs. A higher minimum wage would surely have the same effect in the labor markets as price fixing has in the product markets.
Let's admit it: economics professors do not study or teach anything about the economy; they study and teach about their own hermetically sealed right wing utopian system.
We should build an alternative while ignoring and de-funding the neo-classicists.
'All Life is Problem Solving'
July 12, 2007 10:58 AM | Reply | Permalink
"Recycling surplus wealth back through the public sector is necessary to maintain the value of the currency. As with Social Security, there just isn't enough capacity in the private markets to hold that amount of investment, so the government recycles it and provides a promise of return."
This is complete nonsense. Currently the saving of the private sector of the U.S. economy is insufficient to finance both the borrowing of firms to finance their physical investment and the government to finance its deficits, so that the U.S. economy must borrow heavily from the rest of the world, including the Chinese central bank.
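The point rests on the national-income accounting identity S_private = I + (G - T) + NX. A quick sketch with made-up round numbers (not actual U.S. data) shows how a private saving shortfall necessarily shows up as borrowing from abroad:

```python
# Hypothetical magnitudes (say, billions) illustrating the identity
#   S_private = I + (G - T) + NX
# Rearranged: NX = S_private - I - (G - T); a negative NX means the
# economy is borrowing the difference from the rest of the world.
private_saving = 1500   # assumed private-sector saving
investment = 1800       # assumed firms' physical investment
govt_deficit = 400      # assumed G - T

net_exports = private_saving - investment - govt_deficit

# The trade deficit equals the foreign funds flowing in to cover the gap.
foreign_borrowing = -net_exports
print(net_exports, foreign_borrowing)  # -700 700
```

With these illustrative numbers, domestic saving falls 700 short of what investment and the government deficit absorb, so exactly that amount must be borrowed abroad.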
July 12, 2007 11:03 AM | Reply | Permalink
"I guess what I'm saying is that economics needs to be taken out of the sciences, and put into the humanities, so to speak. This will probably require a complete reboot of the entire field..which is probably a good thing.."
NO, NO, NO!!!
The problem with economics is that ideology plays much too much of a role, and that there is too little science. While economics can never achieve the level of accuracy of the physical sciences, it can and should be made much more scientific.
The first step: No published empirical result, no matter how carefully the research has been conducted, should be accepted as established until it has been independently reproduced by other researchers working independently.
July 12, 2007 11:12 AM | Reply | Permalink
"economics, contrary to the supposition of some of the posts here, cannot predict anything: it's their dirtiest secret that their models fail completely to predict the economy with any precision;"
Firms pay economic forecasters very good money for their professional forecasts. Are they really just being stupid in doing this, or do they recognize that they are getting useful information? Yes, economic predictions are much less accurate than the predictions of, say, astronomers, but they do provide useful information.
July 12, 2007 11:17 AM | Reply | Permalink
"Their argument was that by allowing prices to be fixed at higher levels we would encourage and protect firms who innovate thus increasing our choices and increasing long run competition."
The implications of this assertion have not been fully realized. This argument is simply invalid in a perfectly competitive economy where there is freedom of entry and exit and firms sell undifferentiated goods. But a case for it (not necessarily a valid one) can be made in an economy that consists of differentiated oligopolies. This is a tacit admission by the Chicagoans that real-world market economies are not approximations of perfectly competitive market systems but, rather, are differentiated oligopolistic systems.
Traditionally the Chicago school had argued that while actual market economies were not precisely like the perfectly competitive model, they were close enough approximations thereof that the results predicted by the theory of perfect competition could be applied to them. If they are dominantly differentiated oligopolies, this argument no longer holds.
July 12, 2007 11:29 AM | Reply | Permalink
"Let's admit it: economics professors do not study or teach anything about the economy; they study and teach about their own hermetically sealed right wing utopian system."
This is dishonest. The entire economics profession does not consist of Chicagoans or other right wing ideologues.
July 12, 2007 11:31 AM | Reply | Permalink
The government functions of providing essential infrastructure are paid for out of taxes, of course. They are the public equivalent of overhead to a private organization.
Don't let the Libertarian and Republican free-marketers fool you. They either ignore the costs of the social overhead of the economy, or they try to shift those costs from themselves to someone less able to avoid paying those costs.
When the social costs of the SEC and auditing standards are not paid then the financial markets collapse. That was what led to Enron and WorldCom. The super-smart guys realized that no one else avoiding paying the costs of good audits, but they could get away with it. All they had to do was convince Congress that they did not need such "leashes." And they did. Sen. Phil Gramm as head of the Banking Committee did a good job of removing financial controls on business, and his wife Wendy working at the SEC rewrote the regs so that Enron was not covered. She then went on Enron's Board of Directors when she left the SEC.
The failed infrastructure of decent auditing led to the growth and then failure of Enron in particular. Instead of the taxpayers paying, the costs were shifted to the stockholders, the employees, and the electric rate-payers, particularly in California. [I am less personally knowledgeable about Worldcom and the earlier Sunbeam cases.]
I expect to see the reintegration of the old Baby Bells to result in similar costs. I just don't know when or who will actually pay those costs.
This was political economics in action, and while the accounting industry still discusses it, Economists rarely do. Nor, to my knowledge, do Political Scientists or Policy people.
Broadband should be a free economic infrastructure item as it is in countries like South Korea. The economy will be less robust with the reintegrated AT&T in control. But the guy with the power to control the business today sets the rules, preventing entrepreneurs from growing up and creating new businesses here in the U.S. If those potential entrepreneurs could afford lobbyists, then they would already be rich and would also be fighting to prevent changes in the economy that destroyed the value of their sunk costs. Today's wealthy hate Schumpeter's creative destruction in the economy. That's why there are so many super conservative millionaires (usually the ones who inherited the fortune, not the ones who created it). They are the ones financing the conservative "think" tanks - to protect their fortunes and status.
July 12, 2007 11:32 AM | Reply | Permalink
What about that most fundamental of all boxes -- rational-actor theory? Optimizers uber alles!
July 12, 2007 11:36 AM | Reply | Permalink
And those forecasts, to have any chance of success, have to ignore the basic tenets of orthodox economics. Any prediction made on the basis of textbook economics has little or no value: are the people you proffer as examples using the textbook, or are they using some other system/technique they find useful? Besides, as you know full well, any addition of information will reduce the uncertainty of planning in business, which is an inherently risky proposition. So paying for forecasts and gathering as much advice as you can makes sense even if the forecaster is using some personal "know-how" they happened to have picked up along the way. To turn your point around: if economics was as successful as it claims to be, why don't all businesses employ economists to forecast for them? If the value was obvious they surely would.
Why does the head of the Federal Reserve Board, who teaches orthodoxy, ever have to guess at anything? Why can't he predict accurately what rates are going to be? The theory he teaches states unequivocally that I, let alone he, know what those rates will be: it is an essential part of the theory that everyone knows everything. Take away rational choice and perfect information etc. and the theory erodes to nothing more than a few "insights" very quickly. I don't have a problem with that erosion: it might open the door to new and better theorizing. When physicists theorize about gravity they seek more than a few insights; they seek to explain and predict in the real world. Orthodox economics does neither.
Businesses employ far fewer economists than they used to for precisely my reason: economics, at its heart, is not about the real economy, it is about some hypothetical one where real people don't live. People like you who probably don't know exactly what your preferences will be ten years from now.
I do not undervalue those insights, but to dignify them as a successful theory is foolish and misleading. It is unethical for economists to represent that they have more than a few insights: real people are affected by the application of economist's theories. It would be nice to think that those theories were in contact with reality.
'All Life is Problem Solving'
July 12, 2007 11:43 AM | Reply | Permalink
"they unabashedly say they don't care about the creation of wealth, their system is only supposed to be about the allocation of wealth;"
This just shows the poster's ignorance. As Nobel Prize-winning economist Robert Lucas (yes, he is a Chicago school economist) has said about economic growth: "The consequences for human welfare involved in questions like these are simply staggering: Once one starts to think about them, it is hard to think of anything else."
July 12, 2007 11:43 AM | Reply | Permalink
"The theory he teaches states unequivocally that I, let alone he, know, what those rates will be."
This is incorrect. The theory allows for errors due to events occurring after one's expectations have been formed.
But nevertheless, the dominance of the theory of "rational" expectations as the established theory of expectations formation is the most serious handicap in current macroeconomics. It ignores the fact that the gathering of information and the evaluation of its economic implications can be very expensive, so expensive that most people are rationally ignorant of the things "rational" expectations assumes they know.
For macroeconomics to make real progress in becoming truly scientific, this defective theory of expectations formation has to be replaced.
July 12, 2007 12:00 PM | Reply | Permalink
I don't know how else to put it, but, God, what a parade of ignorance. Almost every single one of those is way wrong. I don't know if Max really doesn't know what he's talking about, is just being cute, or knows that he can get away with lying to a gullible audience. In any case, he's done more than anyone else to convince me that most of the so-called heterodox really DON'T know what they're talking about and SHOULD be ignored. Before, I thought there was something to it.
July 12, 2007 12:13 PM | Reply | Permalink
We desperately need some balance here. Hopefully Krugman or DeLong will chime in as they have in the past.
July 12, 2007 12:16 PM | Reply | Permalink
Max: "Most economics affirms that some foundation of government to enforce contracts etc. is necessary for capitalism to function. So that's at least one thing I can't complain about." That wasn't a reply to me, Max, but it was a point I'd made, so let me apologize. I did have that at the start of a comment, but ineptly, as it was meant to be part of a comment about how more generally private markets and their success/failure are hard to separate from public goods.
Anyhow, great job, and Captain at least granted you the usual free market concession about "sticky wages." Oh, and the point that, of course, the standard model has already taken all criticism into account thanks to Keynesians, except of course for Keynesian criticism of assumptions or outcomes.
John
July 12, 2007 12:16 PM | Reply | Permalink
Captain: Thank you. Allow me to correct myself: this thread is predominantly about orthodox theory. I agree that the entire economics profession includes all sorts of other people with a variety of other ideas: the Post-Autistic Economics web site is evidence of that. The problem is that they do not get much coverage, and they certainly do not influence public policy. Max started us off by attacking orthodoxy: with that restriction my comment stands.
Perhaps you know: how many heterodox economists are there in [say] the top ten economics faculties? And do the undergraduate classes at those schools teach anything other than orthodoxy? For example: is transaction cost economics [mentioned by someone else above, and hardly radical] part of a core curriculum? How about evolutionary economics? Marxist economics? Feminist economics? I agree that economics is a diverse subject, just not in the standard texts, most schools, and in public policy.
And besides, last I looked, the economy, the one we actually live in, is defined by its imperfections and asymmetries, by its unknowns and its variety. Teaching economics should surely begin with that empirical background. Thus my point: economists, as long as they begin with perfection, are not talking about the economy. Let me repeat: this does not decry the insights they have picked up along the way; it just means they don't have a theory sufficient to anchor public policy in the way they often represent.
I do not want to imply that all economists are "ideologues": but I do believe that if they teach orthodoxy they are perpetuating a very weak theory in the guise of a stronger truth. That is unethical. There is nothing wrong with saying we just don't have a good theory, but that we are working on it. That is not what most economists do: they seem to argue that they have "the answer". I just don't think they do. I suppose my larger point is that if economists want to influence public policy they had better be right in their theorizing: they owe it to society. I wouldn't want to go to a doctor who only had a few insights into health, I would want something a little more robust.
Finally: I have yet to read any op-ed articles by mainstream economists decrying the Supreme Court decision I mentioned above. Perhaps I missed them.
'All Life is Problem Solving'
July 12, 2007 12:17 PM | Reply | Permalink
"The outcome in an supply and demand model in principle has no inherently attractive qualities, in and of itself, since it depends on the distribution of ability to pay."
If the pie is made bigger by achieving efficiency, more goods are available to distribute to the poor. Efficiency will not guarantee that it will happen, that requires equity, but it does provide a greater opportunity to do so than an inefficient economy where resources are wasted. Also, it provides the possibility of a very good standard of living for all, which a badly inefficient economy does not. Again, efficiency only provides for the possibility; it does not guarantee it. That requires equity. A good economy requires BOTH efficiency and equity.
Economics does much better with efficiency than with equity because the criteria for efficiency can be unambiguously derived from economic theory itself, while the criteria for equity must come from outside of economics, and since there are many competing concepts of equity there are therefore no unambiguous criteria for judging equity.
July 12, 2007 12:30 PM | Reply | Permalink
Captain: I believe it was Lord Robbins back in the 1930s who wrote that economics was the study of the allocation of scarce resources. His paper helped define the move by the subject into a strictly allocative arena. What else is general equilibrium theory? What else is a Pareto optimum than an allocation according to some arbitrary, a priori, assumption? General equilibrium theory is exactly an allocation system, and it still is the core of economic theorizing. In orthodoxy, other things like growth and wealth creation are shoe-horned in to defend the core.
Absolutely Robert Lucas and many others "think about them". How kind of them! It's the theory we are discussing not their personal thoughts.
And forgive my ignorance. I don't think we should be throwing personal brickbats at each other. I take it you care deeply about economics as I do. The subject is profoundly important: this thread is an example of that!
'All Life is Problem Solving'
July 12, 2007 12:37 PM | Reply | Permalink
I tried to balance that 1 rating, but it won't let me. Strange.
July 12, 2007 12:37 PM | Reply | Permalink
Captain: Precisely and well put. The problem is that efficiency is itself an elusive topic. The argument I have with you is that you seem to believe that economics has resolved what "efficiency" is. I don't. For instance, is efficiency really rooted in rational choice theory, as general equilibrium theory would have us believe? Or is rationality bounded by our inability to see and understand everything? If the latter, then orthodox theory is flawed and doesn't give us the answers you argue it does.
It is not as simple as orthodoxy preaches.
'All Life is Problem Solving'
July 12, 2007 12:43 PM | Reply | Permalink
Usually if you log off TPM, close your browser, reopen it, and log back on to TPM those problems resolve themselves.
However I have a hard time seeing how complaints about (a) the social networking philosophy of TPMCafe (b) Hezbollah are in any way relevant to a discussion about economics scholarship.
sPh
July 12, 2007 12:44 PM | Reply | Permalink
My objection was with the assertion that economics is ONLY concerned with the allocation of wealth.
Theorists like Lucas not only think about the creation of wealth, but there is a tremendous amount of research going on in the area of growth theory, which deals with this topic.
A large part of macroeconomics deals with business cycles and the theory of employment and unemployment, which, except in the case of real business cycle theory, is not fundamentally concerned with the allocation of wealth.
When people make statements that I think are patently ignorant, I say so. No apologies.
July 12, 2007 12:58 PM | Reply | Permalink
Captain: We agree! Rational expectations remains a serious flaw. I happen to think it's fatal. There are plenty of well argued alternatives: bounded rationality being the most obvious. BTW how can there be an "error" in my expectations after the event? Surely they were my expectations as best I knew at the time? They were not in error at all!
'All Life is Problem Solving'
July 12, 2007 1:01 PM | Reply | Permalink
All you say strikes me as accurate excepting the ...may have inherent laws.... At least you said may...
:-)
July 12, 2007 1:10 PM | Reply | Permalink
I wasn't talking about heterodox economists; I was defending economists working within the neoclassical mainstream. People like Krugman, DeLong, Stiglitz and George Akerlof, for example. My own sympathies are, on the whole, with the New Keynesian economists.
July 12, 2007 1:13 PM | Reply | Permalink
I'm not opposed to the study of economics; I just believe it is a liberal art, albeit an important art, not a science. And like the product of any art, economic theory and its application shines in the eye of the beholder and the affected...or not.
July 12, 2007 1:15 PM | Reply | Permalink
Mmmm, wasn't that content rich.
July 12, 2007 1:22 PM | Reply | Permalink
"BTW how can there be an "error" in my expectations after the event? Surely they were my expectations as best I knew at the time?"
The error is that the outcome is different than what one expected and made one's economic decisions on.
A truly rational theory of expectations formation using the rational behavior criteria in orthodox economic theory would be based on the proposition that people will allocate additional resources to the collection of information and the evaluation of its economic implications as long as the expected additional return (marginal expected benefit) is greater than the additional cost (marginal cost) of gathering and evaluating this information. At the point at which these are equal the process would stop and no additional information would be collected and evaluated. If the collection and evaluation of economic information is expensive, and there are good reasons for believing that it often would be, people would not collect and evaluate this information because it would be inefficient to do so, and they would be rationally ignorant of it. Therefore, using orthodox economic theory one can demonstrate that the orthodox theory of "rational" expectations is not, in general, valid, although it MAY be in some highly organized markets.
July 12, 2007 1:30 PM | Reply | Permalink
radek: I would not defend heterodox economics since it is an amalgam of all sorts of things, many parts of which may have little or no predictive or explanatory power. But: please elucidate your comment about "ignorance". Ignorance of what precisely?
Max's initial purpose, I suspect, was to point out the obvious: that orthodox economics is not as successful a project as its influence on public policy would seem to indicate. When we are setting public policy other subjects have as much, if not more, to say about the economy as orthodox economics does. Those of us who care about economics should take that as a challenge rather than to call anyone who calls the emperor naked "ignorant".
'All Life is Problem Solving'
July 12, 2007 1:34 PM | Reply | Permalink
There is a humorous story about a Hindu economist explaining reincarnation to his class.
"If you are a good, and virtuous, economist, you will be reincarnated as a physicist."
"If you are a bad and evil economist, you will be reincarnated as a sociologist."
July 12, 2007 1:39 PM | Reply | Permalink
Max:
Sorry about that. But the challenge has to be taken up: orthodoxy has to prove itself and I get the feeling it thinks it has already. Speaking as one "market failure" to another I find that difficult to take!
'All Life is Problem Solving'
July 12, 2007 1:42 PM | Reply | Permalink
This is a discussion of the social networking philosophy of economists.
Yes?
The parallels seem pretty clear.
July 12, 2007 1:46 PM | Reply | Permalink
Captain: The dominance of general equilibrium theory as the bedrock of orthodoxy weakens your point. Plenty of auxiliary theorizing has been done on wealth generation etc. It is, however auxiliary, because the core remains allocative. I suspect it has to be that way since to talk about wealth generation substantively introduces subject matter that orthodoxy would prefer to avoid in order to remain "positive".
Until the core is changed economics will remain "the study of the allocation of scarce resources".
Again I apologize, in advance, for my "patent ignorance".
As an addendum may I quote Blaug [1997]:
""
'All Life is Problem Solving'
July 12, 2007 1:54 PM | Reply | Permalink
My accusation of ignorance was not directed at you but, rather, at the original message I was criticizing. It is also directed at some of Sawicky's postings. I find your posts to be intelligent and challenging.
"It is, however auxiliary, because the core remains allocative."
The core of microeconomics is and needs to be allocative. In a fundamental way growth theory must also be about allocation, since it tracks the implications over time of the division of current output between current consumption and the production of goods that will increase consumption in the future.
"I suspect it has to be that way since to talk about wealth generation substantively introduces subject matter that orthodoxy would prefer to avoid in order to remain "positive"."
But theories about economic growth include discussions about the "golden rule level of capital."
July 12, 2007 1:55 PM | Reply | Permalink
"Or is rationality bounded by our inability to see and understand everything? If the latter then orthodox theory is flawed and doesn't give us the answers you argue it does."
Offhand I would answer that the existence of bounded rationality does not change the criteria for efficiency derived from orthodox theory, it only implies that market economies cannot, except perhaps by accident, achieve such efficiency. This is almost certainly the actual case.
July 12, 2007 2:09 PM | Reply | Permalink
" The unnatural rate of unemployment. Economists used to say it was 6.0, maybe 5.5 percent. Lower would give rise to ruinous inflation. The huge social benefits of another couple of percentage points less unemployment were -- are -- implicitly discounted. Current rate is 4.5. 'Nuff said."
Are you really this ignorant, or are you dishonestly trying to mislead your readers? It is a well-known fact that the "natural" rate of unemployment is not a natural constant but can change over time, for example, if the structure of the labor market changes. It is well known that it has, in fact, changed over time. (O.K., "natural rate of unemployment" is a misleading term. That is a valid criticism.) The natural rate of unemployment steadily rose from the end of WWII until the 1980s and has since decreased again. Pushing the unemployment rate below 6% in the 1980s would have caused accelerating inflation.
July 12, 2007 2:19 PM | Reply | Permalink
Captain: Thank you. Your explanation sparks a couple of further questions: how do I "know" I have reached the equilibrium between cost and benefit? And at the end of your explanation do I detect what Daniel Dennett would refer to as a "sky hook"? Are you saying that because the cost of finding enough information to be truly rational is likely to be too high [as it surely would] then we can just go ahead irrationally, but we'll call that "rational irrationality" so as to keep the whole edifice up?
Wouldn't it just be better to find a new theory that eliminated the need to be rational in the first place?
The whole system seems filled with dodges to maintain its alleged integrity. Kind of like Ptolemy's astronomy. I suppose that means we're looking for an economics version of Copernicus and Kepler.
And can you cite me an example of a "highly organized market" where orthodox theory alone provides an explanation for its workings?
'All Life is Problem Solving'
July 12, 2007 2:22 PM | Reply | Permalink
Can you please point me to an equation that allows forward-looking predictions of this "natural" rate. Preferably one published by 1980 that accurately modeled and predicted the changes that occurred from 1980 - 2006. Thanks.
.
sPh
July 12, 2007 2:30 PM | Reply | Permalink
Sounds like you buy into the hard science bias of many engineers that I know. You require that the math be applied to limited domains with little interference from outside, thus allowing a complete measurement of the initial status and full knowledge of the processes being applied. That is what is required if you want consistent and reliable predictable results.
In the absence of those conditions, you are calling it an art, but an art is a discipline where the results tell more about the artist than they do about the subject matter the artist is working on.
The social sciences fit somewhere in between. In the liberal arts it is very difficult to say that the measurements, if any, of the initial status were done correctly, and the procedures to be applied can rarely be called wrong. A professional economist can tell when economics is done wrong. It is a lot more than just perception. That is why economics and political science are not considered arts.
July 12, 2007 2:33 PM | Reply | Permalink
Captain: I totally agree with you here. But in my mind it highlights the challenge economics faces: it seems callous, to put it mildly, to talk of a "natural rate of unemployment" considering the human cost associated with the lack of a job; but the system clearly needs some slack so as to avoid a breakdown. Economics needs some serious PR!
The issue that Max seems to be driving at is the relentless response by orthodox economists that "the market" is better at softening the social cost than government intervention is. It makes economics as a whole, rather than the orthodox only, seem unconcerned with the individual even though their theory is based upon rational individuals. Most regular folks would like their community/government to help ameliorate the burden of unemployment and that implies intervention in the market.
I just don't see how you can separate economics from broader social theory at that point.
'All Life is Problem Solving'
July 12, 2007 2:47 PM | Reply | Permalink
I don't think you'll get far! Captain is defending a tough case here: I think the "natural rate of unemployment" is something that appears retrospectively and changes based upon all sorts of factors economists can't predict. My point would simply be that there's nothing "natural" about unemployment if by that we mean losing your job.
Markets are harsh nasty places, which is why most of us like to see them heavily regulated and hemmed in.
'All Life is Problem Solving'
July 12, 2007 2:55 PM | Reply | Permalink
Of course an open economy model is going to do what you say, or anything else you want.
Meanwhile, in actual policy debates the reigning 'model' is IS/LM where foreigners are paralyzed. In other words, we forget about foreigners and lament that an increase in the budget deficit will push up interest rates.
Again, it's the distinction between the practical impact of simple models on one hand, v. the endless sophistication of more advanced theory.
July 12, 2007 3:22 PM | Reply | Permalink
Sounds like industrial policy to me.
July 12, 2007 3:25 PM | Reply | Permalink
I was talking about 'Radek,' not you, pacr.
July 12, 2007 3:27 PM | Reply | Permalink
It seems Mr Williamson accurately describes the scores of dog kibble marketers selling the same poison from a unique producer, the customers unawares.
Certainly Marx is so 19th century in the context of Silicon valley or a nation like Belgium, but not in a third world Nike factory. We must remember that the models of Keynes, Friedman, Galbraith and so on apply to a minority of the people on this planet, and not the hills surrounding Caracas and the many places around the world where first world economics usually spells disaster. It is also, therefore, in universality, that Marx is still more relevant in his descriptions, even nowadays.
July 12, 2007 3:30 PM | Reply | Permalink
O Captain, my Captain -- a changing natural rate is just unnatural. 1980 coming after the oil conniptions I think gives you too easy an argument. The better juncture would be Greenspan in the 90s. How much worse off would we have been if he had listened to Martin Feldstein and subscribed to a NUR?
July 12, 2007 3:31 PM | Reply | Permalink
Haven't any of the physics worshippers in this forum ever "lived by The Weather Channel?" Let the comparisons be fair, the quantum physicists with the micro-theory and the meteorologists with the macro.
July 12, 2007 3:35 PM | Reply | Permalink
I've had more than one English teacher flat out tell me that a reading interpretation was wrong. Yet, go to another English teacher and they will accept my conclusion. The weird thing is, both of them can be right, even if their results are mutually exclusive.
Economics is the same. One economist sees a pattern of results and comes to one conclusion. Another economist looks at the exact same results and comes to a different conclusion. Again, both of them can be right, even if their results are mutually exclusive.
Contrast that to physics. One physicist sees a set of results and comes to one conclusion. Take it to another physicist and they'll reach the same conclusion. You'd have to take it to a thousand physicists before you get some disagreement. Even then, someone is wrong, because there can only be one result that actually happened, not two.
Economics may be a step above the liberal arts in that they at least attempt to apply the scientific method. But, economics isn't much beyond an art.
July 12, 2007 4:50 PM | Reply | Permalink
Here's your rich content Max:
Want some berries with that?
July 12, 2007 7:33 PM | Reply | Permalink
- Cycle detection
- This article is about iterated functions. For another use, see Cycle detection (graph theory).
In computer science, cycle detection is the algorithmic problem of finding a cycle in a sequence of iterated function values.
For any function ƒ that maps a finite set S to itself, and any initial value x0 in S, the sequence of iterated function values
x0, x1 = ƒ(x0), x2 = ƒ(x1), ..., x_i = ƒ(x_{i−1}), ...
must eventually use the same value twice: there must be some i ≠ j such that x_i = x_j. Once this happens, the sequence must continue by repeating the cycle of values from x_i to x_{j−1}. Cycle detection is the problem of finding i and j, given ƒ and x0.
Example
The figure shows a function ƒ that maps the set S = {0,1,2,3,4,5,6,7,8} to itself. If one starts from x0 = 2 and repeatedly applies ƒ, one sees the sequence of values
- 2, 0, 6, 3, 1, 6, 3, 1, 6, 3, 1, ....
The cycle to be detected is the repeating subsequence of values 6, 3, 1 in this sequence.
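This behavior can be checked directly. The following sketch uses a partial table of ƒ reconstructed from the sequence above; ƒ's values on the remaining elements of S are not shown in the text and are not needed here:

```python
# Partial table of the example function ƒ, reconstructed from the
# sequence in the text (ƒ on 4, 5, 7, 8 is unknown and unused).
f = {2: 0, 0: 6, 6: 3, 3: 1, 1: 6}

# Iterate ƒ from x0 = 2 and record the values seen.
x, seq = 2, []
for _ in range(11):
    seq.append(x)
    x = f[x]
print(seq)  # [2, 0, 6, 3, 1, 6, 3, 1, 6, 3, 1]
```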
Definitions
Let μ be the smallest index such that the value x_μ reappears infinitely often within the sequence, and let λ (the loop length) be the smallest positive integer such that x_μ = x_{λ+μ}. One can view the same problem graph-theoretically, by constructing a functional graph (that is, a directed graph in which each vertex has a single outgoing edge) whose vertices are the elements of S and whose edges map an element to the corresponding function value, as shown in the figure. The set of vertices reachable from any starting vertex x0 forms a subgraph with a shape resembling the Greek letter rho (ρ): a path of length μ from x0 to a cycle of λ vertices.
Computer representation
Generally, ƒ will not be specified as a table of values, as we have given it in the figure above. Rather, we may be given access either to the sequence of values xi, or to a subroutine for calculating ƒ. The task is to find λ and μ while examining as few values from the sequence or performing as few subroutine calls as possible. Typically, also, the space complexity of an algorithm for the cycle detection problem is of importance: we wish to solve the problem while using an amount of memory significantly smaller than it would take to store the entire sequence.
In some applications, and in particular in Pollard's rho algorithm for integer factorization, the algorithm has much more limited access to S and to ƒ. In Pollard's rho algorithm, for instance, S is the set of integers modulo an unknown prime factor of the number to be factorized, so even the size of S is unknown to the algorithm. We may view a cycle detection algorithm for this application as having the following capabilities: it initially has in its memory an object representing a pointer to the starting value x0. At any step, it may perform one of three actions: it may copy any pointer it has to another object in memory, it may apply ƒ and replace any of its pointers by a pointer to the next object in the sequence, or it may apply a subroutine for determining whether two of its pointers represent equal values in the sequence. The equality test action may involve some nontrivial computation: in Pollard's rho algorithm, it is implemented by testing whether the difference between two stored values has a nontrivial gcd with the number to be factored. In this context, we will call an algorithm that only uses pointer copying, advancement within the sequence, and equality tests a pointer algorithm.
Algorithms
If the input is given as a subroutine for calculating ƒ, the cycle detection problem may be trivially solved using only λ+μ function applications, simply by computing the sequence of values xi and using a data structure such as a hash table to store these values and test whether each subsequent value has already been stored. However, the space complexity of this algorithm is λ+μ, unnecessarily large. Additionally, to implement this method as a pointer algorithm would require applying the equality test to each pair of values, resulting in quadratic time overall. Thus, research in this area has concentrated on two goals: using less space than this naive algorithm, and finding pointer algorithms that use fewer equality tests.
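As a sketch, the naive hash-table method might look like the following (the function name is mine, not from the article; the example table is the partial reconstruction used above):

```python
def naive_cycle_detection(f, x0):
    """Find (lam, mu) by storing every value seen, with its index."""
    seen = {}            # value -> index of its first appearance
    x, i = x0, 0
    while x not in seen:
        seen[x] = i      # stores lam + mu entries in total
        x = f(x)
        i += 1
    mu = seen[x]         # tail length: index of the first repeated value
    lam = i - mu         # cycle length
    return lam, mu

# Example function from the figure (partial table; unused values omitted).
f = {2: 0, 0: 6, 6: 3, 3: 1, 1: 6}
print(naive_cycle_detection(f.get, 2))  # (3, 2): cycle 6, 3, 1 after tail 2, 0
```

This uses λ+μ function applications but also λ+μ stored values, which is the space cost the methods below are designed to avoid.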
Tortoise and hare
Floyd's cycle-finding algorithm, also called the "tortoise and the hare" algorithm, is a pointer algorithm that uses only two pointers, which move through the sequence at different speeds. The algorithm is named for Robert W. Floyd, who invented it in the late 1960s.[1]
The key insight in the algorithm is that, for any integers i ≥ μ and k ≥ 0, x_i = x_{i+kλ}, where λ is the length of the loop to be found. In particular, whenever i = kλ ≥ μ, it follows that x_i = x_{2i}. Thus, the algorithm only needs to check for repeated values of this special form, one twice as far from the start of the sequence as the other, to find a period ν of a repetition that is a multiple of λ. Once ν is found, the algorithm retraces the sequence from its start to find the first repeated value x_μ in the sequence, using the fact that λ divides ν and therefore that x_μ = x_{μ+ν}. Finally, once the value of μ is known, it is trivial to find the length λ of the shortest repeating cycle, by searching for the first position μ + λ for which x_{μ+λ} = x_μ.
The algorithm thus maintains two pointers into the given sequence, one (the tortoise) at x_i, and the other (the hare) at x_{2i}. At each step of the algorithm, it increases i by one, moving the tortoise one step forward and the hare two steps forward in the sequence, and then compares the sequence values at these two pointers. The smallest value of i > 0 for which the tortoise and hare point to equal values is the desired value ν.
The following Python code shows how this idea may be implemented as an algorithm.
def floyd(f, x0):
    # Main phase of the algorithm: finding a repetition x_i = x_2i.
    # The hare moves twice as quickly as the tortoise.
    tortoise = f(x0)  # f(x0) is the element/node next to x0.
    hare = f(f(x0))
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(f(hare))

    # At this point the start of the loop is equidistant from the current
    # tortoise position and from x0, so the hare, moving around the circle,
    # and the tortoise, reset to x0 and moving towards the circle, will
    # intersect at the beginning of the circle.

    # Find the position mu of the first repetition.
    # The hare and tortoise now move at the same speed.
    mu = 0
    tortoise = x0
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(hare)
        mu += 1

    # Find the length of the shortest cycle starting from x_mu.
    # The hare moves one step at a time while the tortoise stays still.
    lam = 1
    hare = f(tortoise)
    while tortoise != hare:
        hare = f(hare)
        lam += 1

    return lam, mu
This code only accesses the sequence by storing and copying pointers, function evaluations, and equality tests; therefore, it qualifies as a pointer algorithm. The algorithm uses O(λ + μ) operations of these types, and O(1) storage space.
Brent's algorithm
Richard P. Brent described an alternative cycle detection algorithm that, like the tortoise and hare algorithm, requires only two pointers into the sequence.[2] It is based on a different principle: searching for the smallest power of two 2^i that is larger than both λ and μ. The hare advances one step at a time, while the tortoise periodically jumps to the hare's position, namely whenever the number of steps taken since the last jump reaches a power of two; this finds the cycle length λ directly, after which μ is found by moving two pointers λ positions apart at the same speed.
The following Python code shows how this technique works in more detail.
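A minimal sketch of the standard formulation of Brent's algorithm (the function name and the example table, reused from the figure, are my additions):

```python
def brent(f, x0):
    # Main phase: search successive powers of two for the cycle length.
    power = lam = 1
    tortoise = x0
    hare = f(x0)  # f(x0) is the element/node next to x0.
    while tortoise != hare:
        if power == lam:   # time to start a new power of two?
            tortoise = hare
            power *= 2
            lam = 0
        hare = f(hare)
        lam += 1

    # Find the position mu of the first repetition of length lam.
    # First advance the hare lam steps ahead of the tortoise ...
    tortoise = hare = x0
    for _ in range(lam):
        hare = f(hare)
    # ... then move both at the same speed until they agree.
    mu = 0
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(hare)
        mu += 1

    return lam, mu

# Example function from the figure (partial table; unused values omitted).
f = {2: 0, 0: 6, 6: 3, 3: 1, 1: 6}
print(brent(f.get, 2))  # (3, 2)
```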
Like the tortoise and hare algorithm, this is a pointer algorithm that uses O(λ + μ) tests and function evaluations and O(1) storage space. It is not difficult to show that the number of function evaluations can never be higher than for Floyd's algorithm. Brent claims that, on average, his cycle finding algorithm runs around 36% more quickly than Floyd's and that it speeds up the Pollard rho algorithm by around 24%. He also performs an average case analysis for a randomized version of the algorithm in which the sequence of indices traced by the slower of the two pointers is not the powers of two themselves, but rather a randomized multiple of the powers of two. Although his main intended application was in integer factorization algorithms, Brent also discusses applications in testing pseudorandom number generators.
Time–space tradeoffs
A number of authors have studied techniques for cycle detection that use more memory than Floyd's and Brent's methods, but detect cycles more quickly. In general these methods store several previously-computed sequence values, and test whether each new value equals one of the previously-computed values. In order to do so quickly, they typically use a hash table or similar data structure for storing the previously-computed values, and therefore are not pointer algorithms: in particular, they usually cannot be applied to Pollard's rho algorithm. Where these methods differ is in how they determine which values to store. Following Nivasch,[3] we survey these techniques briefly.
- Brent[2] already describes variations of his technique in which the indices of saved sequence values are powers of a number R other than two. By choosing R to be a number close to one, and storing the sequence values at indices that are near a sequence of consecutive powers of R, a cycle detection algorithm can use a number of function evaluations that is within an arbitrarily small factor of the optimum λ+μ.[4][5]
- Sedgewick, Szymanski, and Yao[6] provide a method that uses M memory cells and requires in the worst case only (λ + μ)(1 + cM^(−1/2)) function evaluations, for some constant c, which they show to be optimal. The technique involves maintaining a numerical parameter d, storing in a table only those positions in the sequence that are multiples of d, and clearing the table and doubling d whenever too many values have been stored.
- Several authors have described distinguished point methods that store function values in a table based on a criterion involving the values, rather than (as in the method of Sedgewick et al.) based on their positions. For instance, values equal to zero modulo some value d might be stored.[7][8] More simply, Nivasch[3] credits D. P. Woodruff with the suggestion of storing a random sample of previously seen values, making an appropriate random choice at each step so that the sample remains random.
- Nivasch[3] describes an algorithm that does not use a fixed amount of memory, but for which the expected amount of memory used (under the assumption that the input function is random) is logarithmic in the sequence length. An item is stored in the memory table, with this technique, when no later item has a smaller value. As Nivasch shows, the items with this technique can be maintained using a stack data structure, and each successive sequence value need be compared only to the top of the stack. The algorithm terminates when the repeated sequence element with smallest value is found. Running the same algorithm with multiple stacks, using random permutations of the values to reorder the values within each stack, allows a time–space tradeoff similar to the previous algorithms. However, even the version of this algorithm with a single stack is not a pointer algorithm, due to the comparisons needed to determine which of two values is smaller.
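The single-stack version of Nivasch's algorithm can be sketched as follows (assuming the sequence values are totally ordered; `f` is the iterated function and `x0` the starting value):

```python
def nivasch(f, x0):
    """Return the cycle length using Nivasch's single-stack method."""
    stack = []                # pairs (value, index), strictly increasing in value
    x, i = x0, 0
    while True:
        # pop every stacked value larger than the current one,
        # so the stack stays sorted with its minimum at the bottom
        while stack and stack[-1][0] > x:
            stack.pop()
        # a match on top of the stack is a repeat of the cycle's minimum value
        if stack and stack[-1][0] == x:
            return i - stack[-1][1]
        stack.append((x, i))
        x = f(x)
        i += 1
```

The comparisons against the top of the stack are exactly why this is not a pointer algorithm, as noted above.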
Any cycle detection algorithm that stores at most M values from the input sequence must perform at least (1 + 1/(M − 1))(λ + μ) function evaluations.[9][10]
Applications
Cycle detection has been used in many applications.
- Determining the cycle length of a pseudorandom number generator is one measure of its strength. This is the application cited by Knuth in describing Floyd's method. Brent[2] describes the results of testing a linear congruential generator in this fashion; its period turned out to be significantly smaller than advertised. For more complex generators, the sequence of values in which the cycle is to be found may not represent the output of the generator, but rather its internal state.
- Several number-theoretic algorithms are based on cycle detection, including Pollard's rho algorithm for integer factorization[11] and his related kangaroo algorithm for the discrete logarithm problem.[12]
- In cryptographic applications, the ability to find two distinct values xμ−1 and xλ+μ−1 mapped by some cryptographic function ƒ to the same value xμ may indicate a weakness in ƒ. For instance, Quisquater and Delescaille[8] apply cycle detection algorithms in the search for a message and a pair of Data Encryption Standard keys that map that message to the same encrypted value; Kaliski, Rivest, and Sherman[13] also use cycle detection algorithms to attack DES. The technique may also be used to find a collision in a cryptographic hash function.
- Cycle detection may be helpful as a way of discovering infinite loops in certain types of computer programs.[14]
- Periodic configurations in cellular automaton simulations may be found by applying cycle detection algorithms to the sequence of automaton states.[3]
- Shape analysis of linked list data structures is a technique for verifying the correctness of an algorithm using those structures. If a node in the list incorrectly points to an earlier node in the same list, the structure will form a cycle that can be detected by these algorithms.[15]
- Teske[5] describes applications in computational group theory: determining the structure of an Abelian group from a set of its generators. The cryptographic algorithms of Kaliski et al.[13] may also be viewed as attempting to infer the structure of an unknown group.
- Fich[9] briefly mentions an application to computer simulation of celestial mechanics, which she attributes to William Kahan. In this application, cycle detection in the phase space of an orbital system may be used to determine whether the system is periodic to within the accuracy of the simulation.
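As a concrete illustration of the first application above, the following sketch measures a toy generator's period with Floyd's tortoise-and-hare algorithm (the LCG parameters here are made up for the example; by the Hull–Dobell theorem this generator has full period 64):

```python
def floyd(f, x0):
    """Return (lam, mu) for the sequence x0, f(x0), f(f(x0)), ..."""
    # Phase 1: the hare moves twice as fast; they meet somewhere inside the cycle.
    tortoise = f(x0)
    hare = f(f(x0))
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(f(hare))
    # Phase 2: restart the tortoise at x0; an equal-speed walk meets at the cycle start.
    mu = 0
    tortoise = x0
    while tortoise != hare:
        tortoise = f(tortoise)
        hare = f(hare)
        mu += 1
    # Phase 3: walk the hare once around the cycle to measure its length.
    lam = 1
    hare = f(tortoise)
    while tortoise != hare:
        hare = f(hare)
        lam += 1
    return lam, mu

# A deliberately weak, hypothetical LCG: x -> (5x + 3) mod 64
lcg = lambda x: (5 * x + 3) % 64
print(floyd(lcg, 1))   # period 64, no tail
```

A generator whose measured period is far below 2^k for a k-bit state, as in Brent's experiment, would fail this test.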
References
- ^ Floyd describes algorithms for listing all simple cycles in a directed graph in a 1967 paper: Floyd, R. W. (1967), "Non-deterministic Algorithms", J. ACM 14 (4): 636–644, doi:10.1145/321420.321422. However, this paper does not describe the cycle-finding problem in functional graphs that is the subject of this article. An early description of the tortoise and hare algorithm appears in Knuth, Donald E. (1969), The Art of Computer Programming, vol. II: Seminumerical Algorithms, Addison-Wesley, exercises 6 and 7, page 7. Knuth (p. 4) credits Floyd for the algorithm, without citation.
- ^ a b c Brent, R. P. (1980), "An improved Monte Carlo factorization algorithm", BIT 20 (2): 176–184, doi:10.1007/BF01933190.
- ^ a b c d Nivasch, Gabriel (2004), "Cycle detection using a stack", Information Processing Letters 90 (3): 135–140, doi:10.1016/j.ipl.2004.01.016.
- ^ Schnorr, Claus P.; Lenstra, Hendrik W. (1984), "A Monte Carlo Factoring Algorithm With Linear Storage", Mathematics of Computation (American Mathematical Society) 43 (167): 289–311, doi:10.2307/2007414, JSTOR 2007414 .
- ^ a b Teske, Edlyn (1998), "A space-efficient algorithm for group structure computation", Mathematics of Computation 67 (224): 1637–1663, doi:10.1090/S0025-5718-98-00968-5 .
- ^ Sedgewick, Robert; Szymanski, Thomas G.; Yao, Andrew C.-C. (1982), "The complexity of finding cycles in periodic functions", SIAM Journal on Computing 11 (2): 376–390, doi:10.1137/0211030 .
- ^ van Oorschot, Paul C.; Wiener, Michael J. (1999), "Parallel collision search with cryptanalytic applications", Journal of Cryptology 12 (1): 1–28, doi:10.1007/PL00003816 .
- ^ a b Quisquater, J.-J.; Delescaille, J.-P., "How easy is collision search? Application to DES", Advances in Cryptology – EUROCRYPT '89, Workshop on the Theory and Application of Cryptographic Techniques, Lecture Notes in Computer Science, 434, Springer-Verlag, pp. 429–434.
- ^ a b Fich, Faith Ellen (1981), "Lower bounds for the cycle detection problem", Proc. 13th ACM Symp. Theory of Computation, pp. 96–105, doi:10.1145/800076.802462 .
- ^ Allender, Eric W.; Klawe, Maria M. (1985), "Improved lower bounds for the cycle detection problem", Theoretical Computer Science 36 (2–3): 231–237, doi:10.1016/0304-3975(85)90044-1 .
- ^ Pollard, J. M. (1975), "A Monte Carlo method for factorization", BIT 15 (3): 331–334, doi:10.1007/BF01933667 .
- ^ Pollard, J. M. (1978), "Monte Carlo methods for index computation (mod p)", Math. Comp. (American Mathematical Society) 32 (143): 918–924, doi:10.2307/2006496, JSTOR 2006496 .
- ^ a b Kaliski, Burton S., Jr.; Rivest, Ronald L.; Sherman, Alan T. (1988), "Is the Data Encryption Standard a group? (Results of cycling experiments on DES)", Journal of Cryptology 1 (1): 3–36, doi:10.1007/BF00206323 .
- ^ Van Gelder, Allen (1987), "Efficient loop detection in Prolog using the tortoise-and-hare technique", Journal of Logic Programming 4 (1): 23–31, doi:10.1016/0743-1066(87)90020-3 .
- ^ Auguston, Mikhail; Hon, Miu Har (1997), "Assertions for Dynamic Shape Analysis of List Data Structures", AADEBUG '97, Proceedings of the Third International Workshop on Automatic Debugging, Linköping Electronic Articles in Computer and Information Science, Linköping University, pp. 37–42.
External links
- Gabriel Nivasch, The Cycle Detection Problem and the Stack Algorithm.
- Tortoise and Hare, Portland Pattern Repository
Categories:
- Fixed points
- Combinatorial algorithms
DEBUG on a C file from the SPEC benchmark suite, it gives a report that looks like this:
7646 bytecodewriter - Number of normal instructions
725 bytecodewriter - Number of oversized instructions
129996 bytecodewriter - Number of bytecode bytes written
... (wherever you install it) to your path. Once your system and path are set up, rerun the LLVM configure script and rebuild LLVM to enable this functionality.
  std::cerr << *i << "\n";
However, this isn't really the best way to print out the contents of a BasicBlock! Since the ostream operators are overloaded for virtually anything you'll care about, you could have just invoked the print routine on the basic block itself:
  std::cerr << *blk << "\n";
  std::cerr << "F is used in instruction:\n";
...
  AllocaInst *ai = new AllocaInst(Type::IntTy);
will create an AllocaInst instance that represents the allocation of one integer in the current stack frame, at runtime.
  AllocaInst *pa = new AllocaInst(Type::IntTy, 0, "indexLoc");
where indexLoc is now the logical name of the instruction's execution value, which is a pointer to an integer on the runtime stack.
  Instruction *I = ...;
  BasicBlock *BB = I->getParent();
  BB->getInstList().erase(I);
...
Third, a concrete type is a type that is not an abstract type (e.g. "{ int, float }").
Because the most common question is "how do I build a recursive type with LLVM", we answer it now and explain it as we go. Here we include enough to cause this to be emitted to an output .ll file:
  %mylist = type { %mylist*, int }
The SymbolTable class, for example, needs to move and potentially merge type planes in its representation when a pointer changes. An opaque type with no references to it can never be refined.
This class provides a symbol table that the Function and Module classes use for naming definitions. The symbol table can provide a name for any Value or Type. SymbolTable is an abstract data type. It hides the data it contains and provides access to it through a controlled interface.
Note that the symbol table class is ...
To use the SymbolTable well, you need to understand the structure of the information it holds. The class contains two std::map objects. The first, pmap, is a map of Type* to maps of name (std::string) to Value*. The second, tmap, is a map of names to Type*. Thus, Values are stored in two-dimensions and accessed by Type and name. Types, however, are stored in a single dimension and accessed only by name.
The interface of this class provides three basic types of operations:
The following functions provide three types of iterators; for each, you can obtain the beginning or end of the sequence in both const and non-const forms. It is important to keep track of the different kinds of iterators. There are three idioms worth pointing out:
Using the recommended iterator names and idioms will help you avoid making mistakes. Of particular note, make sure that whenever you use value_begin(SomeType) that you always compare the resulting iterator with value_end(SomeType) not value_end(SomeOtherType) or else you will loop infinitely.
The Core LLVM classes are the primary means of representing the program being inspected or transformed. The core LLVM classes are defined in header files in the include/llvm/ directory, and implemented in the lib/VMCore directory.
#include "llvm/GlobalValue.h"
doxygen info: GlobalValue Class
Superclasses: User, Value
Global variables are represented with the (surprise, surprise) GlobalVariable class. Like functions, GlobalVariables are also subclasses of GlobalValue, and as such are always referenced by their address (global values must live in memory, so their "name" refers to their constant address).
Constant represents a base class for different types of constants. It is subclassed by ConstantBool, ConstantInt, ConstantSInt, ConstantUInt, ConstantArray etc for representing the various types of Constants.
Type, as noted earlier, is also a subclass of Value. Any primitive type (like int, short, etc.) in LLVM is an instance of the Type class. All other types are instances of subclasses of Type, such as FunctionType and ArrayType. DerivedType is the interface for all such derived types, including FunctionType, ArrayType, PointerType, and StructType. Types can have names. They can be recursive (StructType). There exists exactly one instance of any type structure at a time, which allows using pointer equality of Type*s for comparing types.
This subclass of Value defines the interface for incoming formal arguments to a function. A Function maintains a list of its formal arguments. An argument has a pointer to the parent Function.
This is the result of a few things I learned today with suggestions I received. Isn't so hard to read now and I lost one grep, at least.
#! /bin/sh
while true; do
    eval $(awk '/^cpu /{print "previdle=" $5 "; prevtotal=" $2+$3+$4+$5 }' /proc/stat)
    sleep 0.4
    eval $(awk '/^cpu /{print "idle=" $5 "; total=" $2+$3+$4+$5 }' /proc/stat)
    intervaltotal=$((total-${prevtotal:-0}))
    echo -n "{#cccccc}CPU:$((100*( (intervaltotal) - ($idle-${previdle:-0}) ) / (intervaltotal) ))%:"
    echo -n "$(awk '/MHz/ {printf "%.0f", $4}' /proc/cpuinfo | cut -c 1-4)Mhz "
    echo -n "RAM:$(free -m | grep -i /cache | awk '{print $3}')Mb "
    echo -n "T:$(($(cat /sys/bus/acpi/devices/LNXTHERM:00/thermal_zone/temp) / 1000))C "
    echo -n "BAT:$(cat /sys/class/power_supply/BAT0/status | cut -c 1-5):"
    echo -n "$((100*`cat /sys/class/power_supply/BAT0/charge_now` / `cat /sys/class/power_supply/BAT0/charge_full`))% "
    date +%m/%d
    sleep 5
done
I've learned more with my short dabbling with ttwm than I have for some time.
Time is a great teacher, but unfortunately it kills all its pupils ... - Louis Hector Berlioz
Offline
I don't believe bash is capable of the equivalent of "inlining" short one-line functions as in the example from the previous post. I suspect bash may even create subshells for each function call (it does create a new namespace). In the latter case, function calls for one line of code that is not reused anywhere else would be a significant resource drain on a script like this. In either case it would be some degree of drain.
I feel your philosophy when I read your code
It's sometimes a bit hard to understand, but it works very fast.
The Performance Penalty for the shell functions is about 50% I guess. But you will hardly notice... Is it this discussion?
If your function doesn't take an argument, you could use
shopt -s expand_aliases
and aliases to simulate "inlining" of functions.
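For what it's worth, here is a tiny sketch of that alias trick (the alias name and its printf body are made up for illustration):

```shell
#!/bin/bash
# aliases are normally disabled in non-interactive shells; turn them on
shopt -s expand_aliases

# the alias body is spliced into the calling line when it is parsed,
# so there is no function-call or subshell overhead at run time
alias cpu_label='printf "CPU:%s%%\n" "$usage"'

usage=42
cpu_label
```

Each echo section of the status script above could be folded into an alias this way, since none of them take arguments.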
I have the feeling that it sounds very impolite when I write in english.
Offline
I have the feeling that it sounds very impolite when I write in english.
Not at all, I appreciate your points. I do like that page you linked, but I also like to disagree with it
There is a good element of personal taste in such issues though - I do find the "optimized" code easier to read myself. DWM is broken into many functions and I find that harder to read as to follow program logic I have to jump around between several different parts of the code. I (personally) prefer code that is as 'linear' as practical, because we read and understand it in a linear way.
Resist the GNU world order.
Offline
Hi Trilby
Now this is a style question. And it's funny, I tried to implement the data structures we talked about earlier. Of course I made countless tiny little functions
I could not yet try it because I'm stuck in the main loop and the status function. (too few functions
) But I have tested the data structures and the functions with something like unit tests.
What do you think about unit tests and decoupling? I realized a long time ago that I am not able to read a program linear. It's just too much. But decoupling helps. You can understand one aspect of the code and change it without understanding the rest.
With this trick I can cope with many functions. You probably know this. … and-tricks
Hey, I don't want to change you, but this decoupling, code-to-understand/test, Martin Fowler thing was one of the very few moments at school when I thought this is not such a bad idea.
Last edited by wirr (2012-11-23 14:57:33)
Offline
I think you give me too much credit; I have no formal education in computer science. I'm like the drunk blues guitarist on the street corner next to the symphony hall: when some of those fancy dressed "professionals" come out talking about some esoteric aspect of music theory, I just play louder.
That said, I always like learning more. From what I gather about unit testing, I believe I do a bit of that, just in a much less formal way. I often break my code up into testable units. Sometimes this is within the same program, sometimes it is with verbose printf commands that allow me to see what is happening at run time, and sometimes I copy blocks of code to another program altogether just to test and/or optimize.
But after testing is over, I remove all these superfluous bits of code. While it may or may not be a popular view, I firmly believe every data structure and every code instruction (i.e. the assembly or machine code instructions the C will compile into) should carry its weight. Any data element or instruction that is not doing something vital to the program's intended function should be removed.
----
I had to look up "decoupling" in this context. The idea sounds great, but the question is what is considered a unit or building block. I can split apart a window manager into many building blocks that can be tested independently, streamlined, and polished. But in the end, the desired functional unit is the whole window manager. I feel my job would not be complete if I just made many subunits that each worked wonderfully in isolation - Once they are put back together there is another round of polishing and streamlining that can be done.
I do think that breaking apart and putting back together any functional unit repeatedly will lead to great benefits. Interestingly this is true not only of programming: writing instructors encourage the same thing. You might write a whole draft of a paper, then you split it into its parts, and split each part into paragraphs, and each paragraph into sentences. You can evaluate each sentence in isolation, improve, reword, and polish it to perfection. But then you put it back together and each element then has to be adjusted again to work as best as possible in the new setting. The perfect introduction, the perfect body, and the perfect conclusion would make a horrible paper if they were just slammed together - they need to be woven together and each must adjust to the others.
So (my philosophy only), unit testing and decoupling are great tools to dissect your code and find better ways of doing things, but one should not forget that in the end the goal is to write one streamlined elegant program: one unit.
Resist the GNU world order.
Offline
No, I just thought that someone with your programming skills and your opinion about the optimize-later page had heard at least a little about this topic. (That's not an accusation that you should have.)
It's about not throwing away those printlines. But never mind I do it every day.
Think of it like you had another button next to the spell checker (aka the compiler). After you define the content of each paragraph you can press this button to ensure that every paragraph follows its definition (aka run the unit tests). This should give you great freedom to adjust them to the others.
Of course you should not compile the test routines in to the productive code.
But hey you are doing great with your philosophy. Lets say you are KISS
Your Pictures are great, are you a writer?
You say formal, are you a mathematician? You say education: They never looked at our Java code and the most important thing I learned is to never ever use open source. I slapped them in the face. Then I was on the run when I heard your nifty little song. I sit down next to you to listen a bit. And since I am stoned I only recognise it now that I play these crazy funky formal rhythms.
Offline
[Off Topic]
Ha, thanks for the comments.
For my day job I do teach a university writing course for psychology students while I'm working on my dissertation in neuroscience and behavior. I'm not much of a neuroscientist though, and the type of behavior that I study has really lost momentum (and funding) in the American universities.
I have found, though, that I can combine my knowledge of behavioral research with my hobby of programming and build and program testing equipment for research laboratories. I'm doing this 'off the books' now and my equipment is being used in several labs here. I plan on making more of a business of it once I finish my dissertation work.
By using open source software, and some of the new toys in "open source hardware"[1], I can make flexible useful equipment for tens of dollars when the current commercially available counterparts cost hundreds of thousands of dollars.
[1]: I put open-source hardware in quotes as I'm skeptical as to how well the term can apply to hardware - but people have started using this term, and I like what they are producing.
[/Off Topic]
Last edited by Trilby (2012-11-26 16:04:00)
Resist the GNU world order.
Offline
Floating layer
I finally got so irritated with a couple transient windows being tiled that I implemented a floating layer just for these transient (eg dialog) windows. It turns out it was quite easy and did not add much code nor resource use. Further, it only took a couple minor additions to have the floating layer available for any window.
So now ttwm has a proper floating layer. Pull a window into floating by MOD+mouse drag to move (left button) or resize (right button). Push back into tiling by Mod+middle mouse button.
The biggest challenge was figuring out a focus model that does something reasonable when the currently focused window is closed. Currently focus goes to the next window in the same stack if there is one (each desktop has a tiled stack and a floating stack). If there is no next window, focus goes to the top of the tiling stack, or if there are no tiled windows it goes to the top of the floating stack.
Improved external monitor support coming soon
The changes made to do this may actually make ttwm *lighter*. As is often the case, the more general solutions are the more efficient ones. This change caused me to implement some more general solutions in the code. As a by-product of these general solutions I've realized I can turn ttwm's currently very limited external monitor support into full, proper support, so the external monitor can have its own tiled stack of windows and its own floating stack.
I anticipate the changes to the external monitor support by year end. Then after some major code clean up to remove some kludge-buildup I may start preparations for a v2.0 tarball release with anticipated testing in the first quarter of the year with a ~Apr 2013 release of the 2.0 tarball if all goes well.
v2.0 coming ~Apr 2013
If anyone wants to prepare patches for a 2.0 version I'd encourage them to wait until the end of January as I anticipate major code changes until then, but hopefully from then to Apr the code changes should be limited to bug fixes. Also, if anyone (doomicide?*) has patches they'd like to be considered for incorporation into v2.0 please let me know.
Intended changes for 2.0 from 1.0:
- tabbar can be hidden/showed, or moved from top/bottom at run-time
- floating layer (not a separate mode, just a layer)
- full external/second monitor support, I also anticipate independent workspace switching on each monitor.
- some generalized solutions in code may streamline many parts of the code
- goal of keeping this all below 800 lines of code*
*lines-of-code is not a good metric for actual resource use - so in any case where more lines could be more efficient, I'll go with more lines. Just the same, I strive for keeping the code simple as well.
---------
FLOATING FYI: As there are no window borders, if you have terminals (or any windows with the same background color) floating over other terminals, it can be rather hard to tell where one window ends and another begins.
*PATCHES: I see banish and window rules are the current patches on doomicide's github. Banish will not be included but can be achieved with a binding to call iocane. Rules may be considered - I'm not keen on them myself, but if users use them they could be done.
ADDED BONUS: this floating layer allowed for not passing the focus to transient dialog windows. This solves the previous firefox download dialog issue once and for all.
Last edited by Trilby (2012-12-08 01:47:30)
Resist the GNU world order.
Offline
Just messing around with the changes this AM and I'm not getting the firefox download window working properly. See here:
Now I don't have OK or Cancel buttons and mod+Print doesn't work to refocus the window anymore.
Just had a chance to play around with this a little more and I don't appear to have any floating layer support. That little dialog box that pops up in the left upper corner of firefox for downloads can't be moved either. If you killclient firefox closes first and then the dialog box closes when you killclient again. I've tried a few other apps like terminals or gimp and I don't seem to have any floating layer.
Last edited by bgc1954 (2012-12-08 20:30:14)
Time is a great teacher, but unfortunately it kills all its pupils ... - Louis Hector Berlioz
Offline
Have you tried moving or resizing the window with MOD+mouse?
The firefox dialog, I've found, remembers its previous size, so it starts up at whatever size it last recorded. I resized mine once, and from then on it showed up at the proper size.
Can you move/resize windows at all? This ability is not new - the only part that is new is that ttwm "remembers" where moved/resized windows have been placed when you switch between workspaces or open new windows. The MOD+mousebutton behavior has been around from the beginning.
Last edited by Trilby (2012-12-08 21:48:31)
Resist the GNU world order.
Offline
@Trilby - Yes, I have tried moving and resizing but it doesn't work for me. I tried installing an older version of ttwm--built 20121110--that I had in a backup directory and it seems I can't move with mod+mouse in that version either, although I'm sure I could at one point. I wouldn't likely notice that since I next to never move or resize windows.
I also get that little window in the upper left when I try to open a new folder in bookmarks, with firefox, but all I can do there is cancel since there are no other buttons and it won't take the focus to name the new folder. I can't resize the little boxes either.
Like I said in my last post, I tried out gimp as I thought that would be a handy app to be able to move around windows but I can't move or resize anything there either. I'm not sure what to try to troubleshoot this at this end?
Last edited by bgc1954 (2012-12-09 03:08:45)
Time is a great teacher, but unfortunately it kills all its pupils ... - Louis Hector Berlioz
Offline
Do you get any errors or messages on the stdout (tty1 if you use xinit/startx from there)?
I just pushed a "verbose" button press function to git. You could try the current git and make sure the stderr output (again to tty1) sends two messages with every mod+mousebutton press.
Resist the GNU world order.
Offline
Actually, I use xdm to login--remember the problems that caused?
I built new git version, then disabled the xdm.service and rebooted but tty1 just stops at "Reached graphical interface" and sits there. I can change to tty2 and startx but when I quit ttwm after trying mod+mouse buttons, I don't get any errors. I'm likely doing something wrong. I haven't used startx or xint since way before systemd times.
Last edited by bgc1954 (2012-12-09 05:01:11)
Time is a great teacher, but unfortunately it kills all its pupils ... - Louis Hector Berlioz
Offline
With X's new default configs you wouldn't actually get to see any of the output. You'd need to tell X to start on a separate tty.
If your xdm session is shut down you could go to tty2 and run `xinit -- vt3` which will launch X on tty3 while leaving the stderr output visible on tty2.
Alternately, even with a current xdm session (and x) running, you can start a second X session by going to any free tty and running `xinit -- :1 vtN` where N is yet another free virtual terminal.
There should also be a way to monitor the stderr from X when using xdm - it may create a log file.
Resist the GNU world order.
Offline
Well, that works, but strangely. I use xinit --vt3 from tty2 and X starts but on tty1--WTF! I check tty2 but no errors reported when trying out mod+mouse buttons. I'm starting to feel like a damned newbie--no offense, newbies.
edit: Just for s**** & giggles, I built the new version on my netbook and I can't move or resize anything there either. No errors using the above procedure. That's it, I'm going to bed...zzz
Last edited by bgc1954 (2012-12-09 05:48:52)
Time is a great teacher, but unfortunately it kills all its pupils ... - Louis Hector Berlioz
Offline
Are there no messages at all when you use Mod+mousebutton? If not then it's not even detecting the button press at all. (edit: I've now removed these verbose messages)
EDIT: I think I figured it out. I suspect you have a lock key (eg Numlock) on right? I got the keybindings working with lock keys a while back, but I never bothered with the mouse binding. I'll start working on that fix, but if you could confirm that that's the only issue that'd be great.
EDIT: That was an easy fix. If that was your issue the current git version should fix it.
Last edited by Trilby (2012-12-09 13:44:34)
Resist the GNU world order.
Offline
Eureka! That was it.
As soon as I turned off numlock everything functions as it should. Thx, I'm glad it didn't turn out to be a big deal. I've just got a couple of errands to run and then I'll build the new git version and see how it behaves with numlock. I'm sure it will all be fine. Thx again, Trilby.
I can now resize the dialog window from firefox but if I try to add a new folder to bookmarks, I can move and resize the dialog window but I can't rename the folder as the text box never gets highlighted to allow text entry. I don't know if the new git version will do anything for this but I'll let you know.
Edit: I'll have to investigate further with my netbook as it has no numlock key and move/resize wasn't working there last night. Oh bother.
Last edited by bgc1954 (2012-12-09 15:36:34)
Both numlock and capslock would have previously interfered with the mouse bindings - but they should both be fine now.
There are some minor quirks yet with setting input focus for dialog windows - this is probably behind the text box issue you describe. I'll try to replicate that and investigate further. EDIT: I found the same behavior you describe. Now time to investigate. EDIT2: that was a ridiculously easy fix. I had some reason for not allowing transient windows to get input focus ... but I don't remember what it was. I've now allowed them to get focus when clicked, they just should not get focus when originally mapped (displayed).
BTW, reading back through this thread, I saw lighthearted comment you made about raising various issues. I just want to say thanks for taking the time to report such issues. Other "use cases" for ttwm that highlight issues I don't get to see on a daily basis are the only way to keep improving it. I appreciate all the time you've taken to report these issues.
Last edited by Trilby (2012-12-09 16:50:07)
Just built new git and I'm happy to report that everything is great on this end. And I was right, this new version is wonderful for working with gimp. Resize and move with all the gimp windows makes the whole apps usage much better. And I just love the middle mouse "undo/return" button. It's so much handier than having to resize your window back after you've resized it--if that makes any sense.
The netbook issue was just me being over-tired last night and totally forgetting the mod key with the touchpad... duhhh. The key combos there are quite awkward with the touchpad so, unless I plug in a mouse, I'll likely not do much resizing or moving on the netbook.
I had some reason for not allowing transient windows to get input focus ... but I don't remember what it was. I've now allowed them to get focus when clicked, they just should not get focus when originally mapped (displayed).
I think I remember an issue with firefox dialog not getting focus and you toyed with the idea, implemented a focus with mouse, and then went back to your original principle of having no mouse support in ttwm--so much for that
--and that's where we got the mod+Print to focus dialogs.
I've been happy to provide any and all help I can as ttwm has become my alltime favorite wm. Now, I can just continue experimenting with wm's but know that ttwm does everything I need it to do and has a developer that quickly irons out any of the quirks--or bugs--that I've ever come across. I appreciate your work on ttwm and all I've learned as a result of using it.
Hope I didn't swell your head so much that your "trilby" doesn't fit properly.
Last edited by bgc1954 (2012-12-09 17:32:34)
Just noticed a few glitches with the new mod+mouse focus/move--BTW mod+print doesn't seem to do anything anymore to refocus windows. I was moving some mp3 files from my PC to mp3 player using pcmanfm and I was getting a dialog box saying my player was full but I didn't know that until I focused the dialog with mod+mouse and it suddenly resized to fullscreen (monocle mode). Before I focused it, it was just an empty, small, grey box.
I also noticed another behavior with double-clicking on an mp3 file, which pcmanfm opens with gnome-mplayer in my setup. Gnome-mplayer plays the file but the screen doesn't show what it's supposed to, like a moving progress bar and displaying the name of the song, until you mod+rt-mouse and actually move it. Just clicking on it with mod+rt-mouse didn't really do anything to focus it and it seemed to kind of freeze for a bit as I couldn't even move away and back with mod+arrow keys or mod+j/k.
Pcmanfm renaming a folder or file also brings up a dialog box but the text isn't highlighted blue until you click to focus and then the window goes fullscreen (monocle mode). Funny thing is that even though the text isn't highlighted blue before focus--it's grey--you can enter text to rename the folder or file and then click ok.
More grist for the mill.
Last edited by bgc1954 (2012-12-09 20:09:28)
Oye, lots of grist there - my hat will definitely fit now!
I'm installing pcmanfm as that last behavior seems like it should be easiest to try to replicate.
Mod+print shouldn't do much - it just calls the stack function again. If there is no currently focused window it will set focus to the first tiled window, or the first floating window if there are no tiled windows. This is a very 'dumb' function, it just ensures that focus is set to some window on the current workspace.
So far I have not been able to replicate the issue with renaming files in pcmanfm, but I'll keep digging. Do you initiate renaming by the right-click context menu? When I do, the dialog pops up with input focus (input box is highlighted) and I can type a new name. (edit: same functional results with the F2 shortcut).
Last edited by Trilby (2012-12-09 20:53:40)
Yes, I use the right-click menu for renaming or deleting, etc. but the same behavior happens if I use F2. You're right, you get a box with the highlight in grey--normally pcmanfm gives you a blue highlighted box--but you can get the blue highlight by mod+lt-mouse click--I'm getting my right and left mixed up I think--and it looks as I expect. When this happens, if I'm in monocle mode, the dialog box resizes fullscreen and then the cancel button doesn't work unless you click on it and then away and back. The OK button works regardless. Maybe I'm just being picky as it works without touching it, just a different highlight color than I'm used to.
With firefox download dialog box, mine always comes up small and I have to resize with mod+rt-mouse click so the ok and cancel buttons appear. The resize doesn't seem to stick for each use.
I can see why you're not a big rodent fan as this new stuff seems to introduce more complication to the dialog boxes--but only if you mess with them like me.
I still can't replicate the pcmanfm behavior here - I'll try it on my work computer tomorrow.
I do get the same extra-small bookmarks dialog window in firefox. But this I can quite confidently blame on firefox. With these transient windows in the floating layer, ttwm never touches their geometry. That size is the size firefox sets.
The size "memory" I referred to earlier is not part of ttwm, but I believe it is part of gtk-dialog windows. So if a gtk-file-dialog window is opened, resized, then closed, the next gtk-file-dialog window opened will open at the same size.
Firefox does not use gtk-dialogs as far as I can tell. I suspect they use their own XULRunner toolkit (though I know nothing about this except its existence).
I suppose I could check for WM size hints on transient windows and set the size manually. But that seems rather foolish. Whenever a program creates a window, it sets an initial size. Applications should allow window managers to resize windows, but they shouldn't depend on the WM to set the size. OK, firefox does depend on the WM to set the window size. Have I mentioned lately how much I dislike firefox? I have compensated for that and TTWM will check the size-hints of any transient window and set the size appropriately.
Last edited by Trilby (2012-12-10 00:17:37)
Have I mentioned lately how much I dislike firefox? I have compensated for that and TTWM will check the size-hints of any transient window and set the size appropriately.
Not lately, but your resize hints work well. I, myself, am starting to become less fond of my old favorite as it seems to be crashing on me at inopportune, totally random times. I've been starting to use midori more and more but it has its issues as well.
On my end the ok button on the firefox download dialog is greyed out but still works fine when you click on it--just saying.
BTW, my netbook shows the same kind of behaviors as I've described on my desktop.
Last edited by bgc1954 (2012-12-10 00:54:23)
I was being mostly facetious. I actually think the gecko rendering engine is the best available - but in an effort to be so "cross platform" mozilla has had to cut far too many corners on the rest of the code. I'd rather they make different browsers for different platforms ... then they could really have a winner.
Anyhow, my tinkering is done for the night. I might stay up all night working on such things - but my wireless solar powered keyboard that boasts never needing replacement batteries needs replacement batteries. My backup keyboard is missing one third of the keys, with another third broken thanks to a parrot that shares my home. As the escape and F1 keys are non-functional, using vim is completely out, and changing VTs to relaunch X is difficult. So no more coding tonight.
Last edited by Trilby (2012-12-10 01:09:43)
X++ and debugger features
This tutorial is for developers to use advanced constructs of the X++ language and take advantage of productive debugger features. This is a walkthrough of the new features with exercises included to practice using these features.
In previous versions, the X++ code was compiled into pseudo-code, or p-code, that was interpreted on the server or on the client application. This code was then subject to further compilation into CIL. Today the story is much simpler--only CIL is supported, and this code is generated from a new compiler. In this tutorial, we’ll be discussing some of the new features that have been added to the X++ language. As we run through the scenarios, you’ll also see some of the new features in the debugger.
Declare anywhere
Previously, all local variables had to be placed at the start of the method in which they’re used. Now, you have fine-grained control over the scope of your variables. With this new feature, it’s possible to provide smaller scopes for variables, outside of which the variables can’t be referenced. The lifetime of the variable is the scope in which it’s declared. Scopes can be started at the block level (inside compound statements), in for statements, and in using statements as we will see below. There are several advantages to making scopes small.
- Readability is enhanced.
- You can reduce the risk of reusing a variable inappropriately during long-term maintenance of the code.
- Refactoring becomes much easier. You can copy code in without having to worry about variables being reused in contexts they shouldn’t.
Example
In this example, we declare the loop counter inside the 'for' statement in which it's used.
void MyMethod() { for (int i = 0; i < 10; i++) { info(strfmt("i is %1", i)); } }
The scope of the variable is the for statement itself, including the condition expression and the loop update parts. The value can’t be used outside this scope. If you attempt to do that, you will get the following.
void MyMethod() { for (int i = 0; i < 10; i++) { if (i == 7) { break; } } info(strfmt("Found: %1", i)); }
The compiler will issue an error message in the info call: 'i' is not declared.
Example
There's another place where scopes can be established: the using statement, which is another newcomer to the X++ language.
static void AnotherMethod() { str textFromFile; using (System.IO.StreamReader sr = new System.IO.StreamReader("c:\\test.txt")) { textFromFile = sr.ReadToEnd(); } }
As a rule, when you use an IDisposable object, you should declare and instantiate it in a using statement. The using statement calls the Dispose method on the object in the correct way, even if an exception occurs while you are calling methods on the object. You can achieve the same result by putting the object inside a try block, and then explicitly calling Dispose in a finally block; in fact, this is how the using statement is translated by the compiler. Declarations can now be provided anywhere statements can be provided-- a declaration is syntactically a statement, a declaration statement. You can, therefore, provide declarations immediately prior to the usage. You don’t have to declare the variables all in one place.
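To make the translation concrete, the using statement above behaves roughly like the following hand-written try/finally (a sketch, not the exact compiler output):

```xpp
static void AnotherMethodExpanded()
{
    str textFromFile;
    System.IO.StreamReader sr = new System.IO.StreamReader("c:\\test.txt");
    try
    {
        textFromFile = sr.ReadToEnd();
    }
    finally
    {
        // Dispose runs whether ReadToEnd completes normally or throws.
        sr.Dispose();
    }
}
```

The compiler-generated form also guards against a null reference before calling Dispose.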
Example
The following sample shows some of the features described above.
// loop variable declared within the loop: It will // not be misused outside the loop for(int i = 1; i < 10; i++) { // Because this value is not used from outside the loop, // its declaration belongs in this smaller scope. str s = int2str(i); info(s); }
To avoid confusion, the X++ compiler will issue an error if you attempt to introduce a variable that would hide another variable in an enclosing scope or even in the same scope. For instance, the following code will cause the compiler to issue the following diagnostic message: A local variable named 'i' cannot be declared in this scope because it would give a different meaning to 'i', which is already used in a parent or current scope to denote something else.
{ int i; { int i; } }
This aligns well with the rules that are known from C#, but is different from the rule in C++ where shadowing is not diagnosed.
Exercise
Adapt the code in FMVehicleInventoryServiceClass to use smaller scopes.
Static constructors and static fields
Static constructors and static fields are new features in the X++ language. Static constructors are guaranteed to run before any static or instance calls are made to the class. Note one difference from C#: in C#, static state relates to the whole executing application domain, whereas in X++ static state--and the execution of the static constructor--is relative to the user's session. The static constructor has the following profile.
static void TypeNew()
You’ll never call the static constructor explicitly; the compiler will generate code to ensure that the constructor is called exactly once prior to any other method on the class. A static constructor is used to initialize any static data, or to perform a particular action that needs to be performed only once. No parameters can be provided for the static constructor, and it must be marked as static. Static fields are fields that are declared using the static keyword. Conceptually they apply to the class, not instances of the class.
Example
We'll show how a singleton, called instance in the example below, can be created by using the static constructor.
public class Singleton { private static Singleton instance; private void new() { } static void TypeNew() { instance = new Singleton(); } public static Singleton Instance() { return Singleton::instance; } }
The singleton guarantees that only one instance of the class will ever be created; it's consumed as follows.
{ … Singleton i = Singleton::Instance(); }
Assignment of field members inline
It's now possible to assign a value to a field inline, i.e. along with the declaration of the field itself. This applies to both static and instance fields. In the following code, the values of field1 and field2 are defined in this fashion.
public class MyClass2 { int field1 = 1; str field2 = "Banana"; void new() { // … } }
The code above has the same semantic meaning as:
public class MyClass2 { int field1; str field2; void new() { this.field1 = 1; this.field2 = "Banana"; // … } }
The inline assignments work for both static and instance members.
Consts/Readonly
The concept of macros continues to be fully supported in X++. However, using constants instead of #defines has a number of benefits.
- You can add a documentation comment to the const, not to the value of the macro. Ultimately, the language service will pick this up and provide good information to the user.
- The const is known by IntelliSense.
- The const is cross referenced, so you can find all references of a particular constant. This is not the case for a macro.
- The const is subject to access modifiers, either private, protected, or public. The accessibility of macros is not well understood or even rigorously defined.
- Consts have scope, while macros do not.
- You can see the value of consts and readonly variables in the debugger.
Macros that are defined in class scopes (in class declarations) are effectively available in all methods of all derived classes. This was originally a bug in the legacy compiler macro implementation, but this loophole is now massively exploited by application programmers. The new X++ compiler still honors this, but no new code that uses this should be written. This particular feature also considerably impacts compiler performance. Constants can be declared at the class level as suggested below.
private const str MyConstant = 'SomeValue';
The constants can then be referenced by using the double-colon syntax.
str value = MyClass::MyConstant;
If you're in the scope of the class where the const is defined, you can omit the type name prefix (MyClass in the example above). You can easily implement the concept of a macro library this way. The list of macro symbols becomes a class with public const definitions.
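As an illustration, such a macro library class might look like this (the class name and values are hypothetical):

```xpp
public class WebColors
{
    /// <summary>Pure blue, as an RGB value.</summary>
    public const int Blue  = 0x0000FF;
    public const int Green = 0x00FF00;
    public const int Red   = 0xFF0000;
}
```

Consumers then reference WebColors::Blue and so on, with full IntelliSense and cross-reference support.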
Exercise
The fleet application contains the FMDataHelper class that contains the following macro definitions.
public class FMDataHelper { #define.FMSvcTechUserId('FMSvcTec') #define.FMClerkUserId('FMClerk') #define.FMManagerUserId('FMMgr') #define.FMSvcTechUserGrpId('FMSvcTech') #define.FMClerkUserGrpId('FMClerk') #define.FMManagerUserGrpId('FMManager') … }
Change these to const definitions and update the places where the macros are used accordingly.
You can also define consts as local variables. The compiler will maintain the invariant that the value can't be modified.
{ const int Blue = 0x0000FF; const int Green = 0x00FF00; const int Red = 0xFF0000; }
Read-only fields can only be assigned a value once, and that value never changes; the field can be assigned its value either inline, at the place where the field is declared, or in the constructor. Currently, that's the only difference between const and read-only.
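A minimal sketch of the two assignment options for a read-only field (the class is hypothetical):

```xpp
public class Circle
{
    // Assigned inline, at the declaration.
    private readonly real pi = 3.14159265358979;

    // Assigned exactly once, in the constructor.
    private readonly real radius;

    public void new(real _radius)
    {
        this.radius = _radius;
    }
}
```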
Var
You can now declare a variable without explicitly providing the type of the variable, if the compiler can determine the type from the initialization expression. Note that the variable is still strongly-typed into one, unambiguous type. It's only possible to use var on declarations where an initialization expression is provided (from which the compiler will infer the type). There are situations where this can make code easier to read, but this feature shouldn't be misused. You should consider the following rules:
- Use var to declare local variables when the type of the variable is obvious from the right side of the assignment, or when the precise type is not important.
- Don't use var when the type isn't apparent from the initialization expression.
// When the type of a variable is not clear from the context, use an // explicit type. int var4 = myObject.ResultSoFar();
- Use var for the declarations of for loop counters.
- Use var for disposable objects inside using statements.
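A short sketch of these guidelines (the variable names are illustrative):

```xpp
// The type is apparent from the initializer:
var total = 0;          // int
var message = "Hello";  // str

// Loop counters are good candidates:
for (var i = 0; i < 10; i++)
{
    info(int2str(i));
}
```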
Private and protected member variables
Previously, all member variables defined in a class were invariably protected. It's now possible to make the visibility of member variables explicit by adding the private, protected, and public keywords. The interpretation of these modifiers is obvious and aligns with the semantics for methods:
- A private member can only be used within the class where it's defined.
- A protected member can be used in the class where it's defined, and in all subclasses thereof.
- A public member can be used anywhere: it's visible outside the confines of the class hierarchy in which it's defined.
The default for member variables that aren't adorned with an explicit modifier is still protected. You should make it a habit of explicitly specifying the visibility. As described, when a member variable is defined as public, it may be consumed outside of the class in which it's defined. In this case, a qualifier designating the object hosting the variable has to be specified, using the dot notation (as is the case for method calls). Reusing the code from above:
public class MyClass2 { int field1; str field2; void new() { this.field1 = 1; // Explicit object designated field2 = "Banana"; // 'this' assumed, as usual } }
In this case, field1 is accessed using the explicit 'this.' qualifier. Note: Making a member variable public may not be a good idea since it exposes the internal workings of the class to its consumers, creating a strong dependency between the class implementation and its consumers. You should always strive to only depend on a contract, not an implementation.
Finally in try/catch statements
Try/catch statements can now include an optional finally clause. The semantics are the same as they are in C# and other managed languages. The statements in the finally clause are executed when control leaves the try block, either normally or through an exception.
try { // ... } catch { // Executes when any exception is thrown in the dynamic // scope in the try block. } finally { // Executed irrespective of how the try block exits. }
Event handlers and Pre/Post methods
In legacy X++, it was possible to prescribe in metadata that certain methods were to be executed prior to and after the execution of a method. The information about what subscribers to call was recorded on the publisher, which isn't useful in this environment. It's now possible to provide Pre and Post handlers through code, by placing attributes (such as PreHandlerFor and PostHandlerFor) on the subscribers.
Example
[PreHandlerFor(classStr(MyClass2), methodstr(MyClass2, publisher))] public static void PreHandler(XppPrePostArgs arguments) { int arg = arguments.getArg("i"); } [PostHandlerFor(classStr(MyClass2), methodstr(MyClass2, publisher))] public static void PostHandler(XppPrePostArgs arguments) { int arg = arguments.getArg("i"); int retvalFromMethod = arguments.getReturnValue(); } public int Publisher(int i) { return 1; }
This example shows a publishing method called Publisher. Two subscribers are enlisted with the PreHandlerFor and PostHandlerFor. The code shows how to access the variables, and the return values. Note: This feature is provided for backward compatibility and, because the application code doesn't have many delegates, to publish important application events. Pre and Post handlers can easily break as the result of added or removed parameters, changed parameter types, or because methods are no longer called, or called under different circumstances. Attributes are also used for binding event handlers to delegates:
[SubscribesTo( classstr(FMRentalCheckoutProcessor), delegatestr(FMRentalCheckoutProcessor, RentalTransactionAboutTobeFinalizedEvent))] public static void RentalFinalizedEventHandler( FMRental rentalrecord, Struct rentalConfirmation) { } delegate void RentalTransactionAboutTobeFinalizedEvent( FMRental fmrentalrecord, struct RentalConfirmation) { }
In this case, the SubscribesTo attribute specifies that the method RentalFinalizedEventHandler should be called when the FMRentalCheckoutProcessor.RentalTransactionAboutToBeFinalizedEvent delegate is called. Since the binding between the publisher and subscribers is done through attributes, there's no way of specifying the sequence in which subscribers are called.
Extension methods
It's perfectly valid to have private or protected static methods in an extension class. These are typically used for implementation details and are not exposed as extensions.
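As a hypothetical sketch (the class, method, and field names are assumptions, not part of the standard application), an extension class with a public extension method and a private helper could look like this:

```xpp
// The class name must end in _Extension; the first parameter of each
// public static method is the type being extended.
public static class FMVehicle_Extension
{
    // Callable as myVehicle.describe() on an FMVehicle instance.
    public static str describe(FMVehicle _vehicle)
    {
        return FMVehicle_Extension::prefix() + _vehicle.VehicleId;
    }

    // Private helpers are allowed; they are implementation details
    // and are not exposed as extension methods.
    private static str prefix()
    {
        return 'Vehicle: ';
    }
}
```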
Why use extension methods?
The extension method technique doesn't affect the source code of the class it extends. Therefore, the addition to the class can be done without over-layering. Upgrades to the target class are never affected by any existing extension methods. However, if an upgrade to the target class adds a method that has the same name as your extension method, your extension method becomes unreachable through objects of the target class. Extension methods are easy to use. The extension method technique uses the same dot-delimited syntax that you routinely use to call regular instance methods. Extension methods can access all public artifacts of the target class, but they can't access things that are protected or private. In this way, extension methods can be seen as a kind of syntactic sugar.
Where can extension methods be applied?
The target of an extension method must be one of the following application object types:
- Class
- Table
- View
- Map
Regardless of the target type, an extension class is used to add extension methods to the type. In other words, an extension table is not used to add methods to a table; in fact, there's no such thing as an extension table.
Using clauses
Previously, a managed type had to be referenced by its fully qualified name, including its namespace. Now, a using clause lets you reference the types in a namespace by their unqualified names.
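A sketch of the effect, assuming the standard System.IO namespace:

```xpp
using System.IO;

public class FileReaderSample
{
    public static str ReadAll(str _path)
    {
        str result;
        // Thanks to the using clause, StreamReader can be written
        // without the System.IO prefix.
        using (StreamReader sr = new StreamReader(_path))
        {
            result = sr.ReadToEnd();
        }
        return result;
    }
}
```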
Differences between legacy X++ and new X++
In this section, we'll see some of the differences between legacy X++ and the new X++.
Reals are Decimals
The type used to represent real values has changed from interpreted X++. This won't require you to rewrite any code, because the new type can express all of the values that the old one could. We provide this material in the interest of full disclosure only. All instances of the real type are now implemented as instances of the .NET decimal type (System.Decimal). Like the real type in previous versions, the decimal type is a binary coded decimal type that, unlike floating point types, is resilient to rounding errors. The range and resolution of the decimal type are different from the original type's. The original X++ real type supported 16 digits and an exponent that defined where the decimal point is placed. The new X++ real type can represent decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 (2⁹⁶-1) to negative 79,228,162,514,264,337,593,543,950,335 (-(2⁹⁶-1)). The new real type doesn't eliminate the need for rounding. For example, the following code produces a result of 0.9999999999999999999999999999 instead of 1. This is readily seen when using the debugger to show the value of the variables below.
public class MyClass2 { public static void Main(Args a) { real dividend = 1.0; real divisor = 3.0; System.Decimal valueAsDecimal; real value = dividend/divisor * divisor; valueAsDecimal = value; info(valueAsDecimal.ToString("G28")); } }
Where it matters, you can still apply explicit rounding with the round function:
value = round(value, 2);
Internal representation: the real type represents values of the form m / 10ⁿ, where m ranges from -(2⁹⁶-1) to 2⁹⁶-1 and n ranges from 0 to 28; -(2⁹⁶-1) is the minimum value and 2⁹⁶-1 is the maximum value that can be expressed.
String truncation
String truncation is not a new feature. However, when code is executed in IL in previous versions, the automatic string truncation described here doesn’t take place. String values can be declared in X++ to contain a maximum number of characters. Typically, this is achieved by encoding this information in an extended data type; for instance, an extended data type can specify that credit card numbers cannot exceed 20 characters.
It's also possible to specify length constraints directly in the X++ syntax:
str 20 creditCardNumber;
All assignments to these values are implicitly truncated to this maximum length.
Exercise
Run the following code in the debugger by including it in a static main method:
str 20 creditCardNumber;
creditCardNumber = "12345678901234567890Excess string";
Casting
The previous version of X++ was very permissive in its treatment of type casting. Both up-casting and down-casting were allowed without intervention from the programmer. Some of the casting permitted in legacy X++ can’t be implemented in the confines of the .NET runtime environment. In object oriented programming languages, including X++, casting refers to assignments between variables whose declared types are both in the same inheritance chain. A cast is either a down-cast or an up-cast. To set the stage for this discussion, we introduce a few self-explanatory class hierarchies:
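The hierarchies can be sketched with minimal declarations that are consistent with the Animal, Horse, MotorVehicle, and Car names used in the discussion below:

```xpp
class Animal {}
class Horse extends Animal
{
    public void run() {}
}

class MotorVehicle {}
class Car extends MotorVehicle
{
    public void run() {}
}
```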
As you can see, the MotorVehicle class isn't related to the Animal class. An up-cast happens when assigning an expression of a derived type to a base type:
Animal a = new Horse();
A down-cast happens when assigning an expression of a base type to a derived variable.
Horse h = new Animal();
Both up-casts and down-casts are supported in X++. However, down-casts are dangerous and should be avoided whenever possible. The example above will fail with an InvalidCastException at runtime, since the assignment doesn't make sense. X++ supports late binding on a handful of types, like object and formrun. This means that the compiler won't diagnose any errors at compile-time when it sees a method being called on those types, if that method isn't declared explicitly on the type. It's assumed that the developer knows what they're doing. For instance, the following code may be found in a form.
Object o = element.args().caller(); o.MyMethod(3.14, "Banana");
The compiler can't check the parameters, return values, etc. for the MyMethod method, since this method isn't declared on the object class. At runtime, the call will be made using reflection, which is orders of magnitude slower than normal calls. Note that calls to methods that are actually defined on the late binding types will naturally be checked. For example, the call to ToString():
o.ToString(45);
will cause a compilation error:
'Object.toString' expects 0 argument(s), but 1 specified.
because the ToString method is defined on the object class. There's one difference from the implementation in the previous version of X++, related to the fact that methods could be called on unrelated objects, as long as the name of the method was correct, even if the parameter profiles weren't entirely correct. This isn't supported in CIL.
Example
public class MyClass2 { public static void Main(Args a) { Object obj = new Car(); Horse horse = obj; // exception now thrown horse.run(); // Used to call car.run()! } }
You should use the IS and AS operators liberally in your code. The IS operator can be used if the expression provided is of a particular type (including derived types); the AS operator will perform casting into the given type and return null if a cast isn't possible.
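A brief sketch of both operators, reusing the Animal and Horse types from the casting discussion (the factory method is hypothetical):

```xpp
Animal a = AnimalFactory::create();  // hypothetical factory

// 'is' tests whether the runtime type is Horse (or derived from it).
if (a is Horse)
{
    // 'as' performs the down-cast, yielding null if the cast isn't possible.
    Horse h = a as Horse;
    if (h != null)
    {
        h.run();
    }
}
```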
Compiler diagnoses attempts to store objects in containers
In previous incarnations of the X++ compiler, it was possible to store object references into containers, even though this would fail at runtime. This is no longer possible. When the compiler sees an attempt to store an object reference into a container:
container c = [new Query()];
It will issue the error message:
Instances of type 'Query' cannot be added to a container.
If the type of the element that is added to the container is anytype, the compiler can't determine whether the value is a reference type. The compiler will allow this under the assumption that the user knows what they're doing, and won't diagnose the following code as erroneous:
anytype a = new Query(); container c = [a];
but an error will be thrown at runtime.
Cross company clause can contain arbitrary expressions
The cross company clause can be used on select statements to indicate the companies that the search should take into account. The syntax has been enhanced to allow arbitrary expressions of type container, instead of only a single identifier naming a container variable. Previously, a variable was required:
private void SampleMethod()
{
    MyTable t;
    container mycompanies = ['dat', 'dmo'];

    select crosscompany: mycompanies t;
}
Now, it's possible to provide the expression without having to use a variable for this purpose.
private void SampleMethod()
{
    MyTable t;

    select crosscompany: (['dat'] + ['dmo']) t;
}
mkDate predefined function no longer accepts shorthand values
In legacy systems, it was possible to use "shorthand" values for the year argument of the mkDate function. The effect can be seen in the following code sample.
static void Job16(Args _args)
{
    int y;
    date d;

    for (y = 0; y < 150; y++)
    {
        d = mkDate(1, 1, y);
        info(strFmt("%1 - %2", y, year(d)));
    }
}
Running this code in the legacy system will produce the following values:

0 – 2000
1 – 2001
2 – 2002
…
27 – 2027
28 – 2028
29 – 2029
30 – 2030
31 – 1931
32 – 1932
33 – 1933
…
97 – 1997
98 – 1998
99 – 1999
100 – 1900

We no longer support these values. Attempts to use such values will cause the mkDate function to return the null date (1/1/1900).
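The mapping in the sample output follows a simple pivot between 30 and 31. A sketch of just the two-digit range, reconstructed from the sample above (Java used for illustration; values of 100 and above, which the sample shows collapsing to 1900, aren't modeled here):

```java
// Reconstruction of the legacy shorthand-year pivot from the sample
// output above: 0..30 map to 2000..2030, 31..99 map to 1931..1999.
final class LegacyYearPivot {
    static int pivot(int y) {
        if (y < 0 || y > 99) {
            throw new IllegalArgumentException("only two-digit shorthand modeled");
        }
        return (y <= 30) ? 2000 + y : 1900 + y;
    }
}
```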
Obsolete statement types
The following statements are no longer supported.
Pause and Window statements
The X++ pause statement is no longer supported because the pop-up Print Window that it affected has been removed. The pause and window statements were mainly used for debugging within the MorphX development environment, which was the same as the execution environment. Since the two are now separated, with Visual Studio taking the place of the MorphX environment, these statements are no longer relevant.
Print statement
The X++ print statement is another statement that existed only for debugging purposes. It still exists, and its basic idea is unchanged, but print now outputs through System.Diagnostics.WriteLine. The product configuration determines the level of detail of the information that is written.
You may find that using the Infolog is more compelling, since its output appears in the debugger and the running form.
The Ignore list
Since the legacy environment was entirely interpreted, it was possible to have some parts of the application fail to compile and still use the rest. As long as you only called methods that compiled correctly, you were fine; however, you would run into trouble if you tried to call methods that weren't successfully compiled. This way of working doesn't work in CIL: assemblies are generated from successful compilations, and the runtime system can't load incomplete assemblies. However, there are legitimate scenarios when porting legacy applications into the new environment where it's beneficial to get things running in a staged fashion and where parts of the application need to be tested before everything is ported. While this is useful for that very limited scenario, it shouldn't be used once the application is ready for production, since you would be hiding problems that will occur at runtime, after the system has been deployed. This is how it currently works: you can specify a method in an XML file by selecting "Edit Best Practice Suppressions" from the context menu on the project. This will open an XML document where the exclusions are maintained.
New Debugger features
This section provides information about the new features that we've added to the debugging experience in Visual Studio.
Adding ToString methods to your classes
It's often a benefit to add ToString() methods to your classes. The effort spent doing this comes back many times and it's easy to do. This advice also holds true for legacy X++. Note: Since ToString methods can be called at unpredictable times, it isn't a good idea to change the state of the object here.
Identifying unselected fields
It's a common source of bugs to use fields from a table when these fields don't appear in the field list in a select statement. Such fields will have a default value according to their type. It's now possible in the debugger to see if a value has been selected or not.
Exercise
Consider the following code:
class MyClass { public static void Main(Args a) { FMRental rental; select EndMileage, RentalId from rental; rental.Comments = "Something"; } }
Set a breakpoint on the assignment statement. Make your class the startup object in your project, and start by pressing F5. When the breakpoint is encountered, view the rental variable by expanding it in the locals window. You'll see something similar to the following graphic.
You can see that the fields that have been selected (EndMileage and RentalId) appear with their selected values, while the unselected fields appear as null, signifying that their values weren't fetched from the database. Obviously, this is a debugging artifact: the values of the unselected fields will be the default value for the type of the field. Step over this statement and notice how the debugger changes the rendering to the actual value. Note: If the table is cached, the system will always fetch all fields, irrespective of the field list provided in the code.
The Auto and Infolog Windows
The debugger allows you to easily access certain parts of the application state. This information is available in the Autos window, where the current company, the partition, the transaction level, and the current user ID are listed.
There is also a window showing the data that is written to the Infolog.
New breakpoint features
The Visual Studio debugger supports conditional breakpoints and breakpoints that are triggered by hit count. You can also have the system perform specific actions for you as you hit the breakpoint. None of these features were available in the legacy debugger. These are explained below:
- Hit count enables you to determine how many times the breakpoint is hit before the debugger breaks execution. By default, the debugger breaks execution every time that the breakpoint is hit. You can set a hit count to tell the debugger to break every 2 times the breakpoint is hit, or every 10 times, or every 512 times, or any other number you choose. Hit counts can be useful because some bugs don't appear the first time your program executes a loop, calls a function, or accesses a variable. Sometimes, the bug might not appear until the 100th or the 1000th iteration. To debug such a problem, you can set a breakpoint with a hit count of 100 or 1000.
- Condition is an expression that determines whether the breakpoint is hit or skipped. When the debugger reaches the breakpoint, it'll evaluate the condition, and the breakpoint will be hit only if the condition is satisfied. You can use a condition with a location breakpoint to stop at a specified location only when a certain condition is true.
Exercise
Consider the following code:
class PVsClass
{
    public static void Main(Args a)
    {
        int i;

        for (i = 0; i < 10; i++)
        {
            print i;
        }
    }
}
Put a breakpoint on the print statement by pressing F9 while that statement is selected. This will create a normal, unconditional breakpoint. Now use the mouse to open the context menu for the breakpoint and select Condition. Put in a condition indicating that the breakpoint should be hit when the value of the i variable exceeds 5. Set the project as the startup project, and the class containing the code as the startup item in the project. Run the code. Feel free to modify the value of i using the debugger. Now remove this breakpoint, and use the Hit count feature to accomplish the same thing. Note: A breakpoint can have several conditions. It's often helpful to hover the cursor over the breakpoint, causing an informative tooltip to appear. Tracepoints are often useful to trace execution: insert a tracepoint on the line in question and log the value of the variable. The trace output will appear in the Output window in the debugger.
The immediate window
The immediate window is a useful feature in the VS debugger that allows the user to enter expressions and statements to evaluate at any given time. This feature isn't currently implemented for the X++ stack, as is the case for several other languages, notably F#. However, that doesn't mean that the savvy user can't benefit from the immediate window; it just means that snippets must be expressed in C#, not in X++. There's a separate document that describes the details of how this can be done to great effect.
https://docs.microsoft.com/bg-bg/dynamics365/unified-operations/dev-itpro/dev-tools/new-x-debugger-features
: (The Person, Student, Employee, Faculty, and Staff classes) Design a class named Person and its two subclasses named Student and Employee. Make Faculty and Staff subclasses of Employee. A person has a name, address, phone number, and email address. A student has a class status (freshman, sophomore, junior, or senior). Define the status as a constant. An employee has an office, salary, and date hired. Define a class named MyDate that contains the fields year, month, and day. A faculty member has office hours and a rank. A staff member has a title. Override the toString method in each class to display the class name and the person's name.
Draw the UML diagram for the classes. Implement the classes. Write a test program that creates a Person, Student, Employee, Faculty, and Staff, and invokes their toString() methods.
Here is What I have so far: (I have 6 classes + 1 test file so 7 different files)
public class person {
    private String name, address, phone, email;

    public person() {
    }

    public person(String name, String address, String phone, String email) {
        this.name = name;
        this.address = address;
        this.phone = phone;
        this.email = email;
    }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getAddress() { return address; }
    public void setAddress(String address) { this.address = address; }

    public String getPhone() { return phone; }
    public void setPhone(String phone) { this.phone = phone; }

    public String getEmail() { return email; } // note: the original returned phone here
    public void setEmail(String email) { this.email = email; }

    @Override
    public String toString() {
        return getClass().getName() + "\n" + name;
    }
}
public class student extends person{ private final String CLASS_STATUS; public student(String; } }
Ok, here is where I have the problem: I cannot run my test program because I have "cannot find symbol" errors. I am not sure what I am doing wrong and how to fix it, and I am not sure if the problem is in the classes or in the test program. (And mention if you see any other problems. Thanks!)
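The post is cut off inside the student constructor, which is a typical source of "cannot find symbol" errors once the file no longer parses. A hypothetical completion sketch (the person class is trimmed to a single field here to keep it short; the lowercase class names and the status-constant idea follow the poster's code, everything else is an assumption):

```java
// Hypothetical sketch completing the truncated student class above.
// The real person class has four fields; one is enough to show the shape.
class person {
    private final String name;
    person(String name) { this.name = name; }
    public String getName() { return name; }
    @Override public String toString() {
        return getClass().getName() + "\n" + name;
    }
}

class student extends person {
    public static final String FRESHMAN = "Freshman"; // status constant
    private final String classStatus;

    student(String name, String classStatus) {
        super(name); // explicit superclass constructor call
        this.classStatus = classStatus;
    }

    @Override public String toString() {
        return super.toString() + "\n" + classStatus;
    }
}
```

The key point is that the subclass constructor must call a matching superclass constructor via super(...) before doing anything else.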
Edited by agmolina90: n/a
https://www.daniweb.com/programming/software-development/threads/382042/help-programming-in-java-with-super-class
Hi all,
Could you please let us know the best way to do the below validation in XSLT.
We are reading the card number from the input using an XPath expression in an <xsl:value-of>. It is a string of 19 characters. We would like to include a check which verifies that the first 15 characters are digits and the next 4 characters are spaces. What is the best way in XSL to perform this check?
"123456789123456 " is valid
Anything other than this format should not be accepted, that is, strings whose first 15 characters are not all digits or whose next 4 characters are not all spaces. Can we use any script within XSL, or regex? Please advise. If regex or scripts are to be used, let us know what namespaces to declare as well.
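The format described, exactly 15 digits followed by exactly 4 spaces, amounts to the regular expression [0-9]{15} {4}. In XSLT 2.0 this can be checked natively with the matches() function; the pattern itself is sketched below in Java for illustration (the class name is mine):

```java
import java.util.regex.Pattern;

// The check from the question: first 15 characters are digits,
// the last 4 characters are spaces (19 characters total).
final class CardFormatCheck {
    private static final Pattern CARD = Pattern.compile("[0-9]{15} {4}");

    static boolean isValid(String s) {
        // matches() requires the whole string to match, so no anchors needed
        return s != null && CARD.matcher(s).matches();
    }
}
```

An XSLT 1.0 processor has no matches(), so there the same check is usually assembled from string-length(), substring(), and translate().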
https://www.daniweb.com/programming/software-development/threads/349228/validation-of-a-sequence-in-xsl
Hi all
I am using Visual Studio 2008 and .NET 3.5. I am trying to encrypt a string; it works fine when I use SHA1, but when I try the same with SHA256, SHA384, or SHA512 I get the following error:
the type or namespace name "SHA512" could not be found (are you missing a using directive or an assembly reference?)
and similarly for the other SHA256 and SHA384. And when I type the SHA in the visual studio I am only getting the following things
- SHA1
- SHA1CryptoServiceProvider
- SHA1Managed
and it doesn't list any of the other SHA256,SHA384,and SHA512. What am I missing here ?
P.S.
I am using using System.Security.Cryptography;
Thanks,
Varun Krishna. P
http://channel9.msdn.com/Forums/TechOff/SHA512-in-visual-studio-2008-and-net-35
Download source code for Working with WebBrowser in WPF.
It is very easy to use the WebBrowser control in your WPF application. In your XAML you may include:
<WebBrowser x:Name="wbMain">
</WebBrowser>
This will create a WebBrowser control within your WPF window. Now to load a document, either you have to navigate to a site or directly load the document from your application.
Let's, for instance, load a string directly:
wbMain.NavigateToString("<HTML><H2><B>This page comes using String</B><P></P></H2>");
Here, NavigateToString will load the string data into the WebBrowser. So basically you need to pass an html body directly to the WebBrowser control using this method as string.
On the other hand, if you like to do the same thing using Stream, you might go for NavigateToStream which takes a Stream as method argument and loads it.
Uri uri = new Uri(@"pack://application:,,,/mypage.htm");
Stream source = Application.GetContentStream(uri).Stream;
wbMain.NavigateToStream(source);
For instance, you can see I have been using the relative URL of the package resource, which is loaded into a stream and passed to the browser using the NavigateToStream method. You can also use normal URLs to load content rather than the pack content URL.
Similar to this, you can also load data from a file located on your hard drive. To do this, you might either read the entire file into a string or stream, using the methods I have provided earlier, or load it directly into your WebBrowser control. You can also use the general Navigate method to navigate to any location, using either a UNC path or a normal web address.
So say I write :
wbMain.Navigate(new Uri("", UriKind.RelativeOrAbsolute));
It will load my entire website directly within the WebBrowser control inside the WPF window. Similarly you can also use
wbMain.Navigate(new Uri("c:/xyz/TestFile.htm"));
You can see this will load the file into the WebBrowser control.
Regarding other normal methods you might use
if (wbMain.CanGoBack)
{
wbMain.GoBack();
}
if (wbMain.CanGoForward)
{
wbMain.GoForward();
}
to move back and forth between the pages. It is always better to check CanGoBack and CanGoForward to determine whether the navigation is possible before calling GoBack or GoForward.
Theming a WebBrowser really means theming the web document, and a web document can be themed only using HTML. You need to know basic HTML styles to theme a document inside the WebBrowser. For instance, you might use CSS to change the color of the scrollbar so that it suits your application.
<style type=\"text/css\"> body { scrollbar-base-color:black; }</style>
This will create a black-styled browser window, just as shown above, with black scrollbars. For further knowledge, you can read up on CSS and HTML.
This is not the end of it; there are lots of things that you can do using the WPF WebBrowser control. For instance, the most common issue that everybody faces while loading an HTML file with full JavaScript support is the trust prompt. Depending on the settings you have applied to your Internet Explorer browser, it might not trust a disk HTML file loaded directly in your web browser.
Say for instance, I have a file which shows a javascript alert when the page is loaded.
<script type="text/javascript">
function getAlert(){
alert("Hi the page is loaded!!!");
}
window.onload = getAlert;
</script>
So you can see, when the window is loaded the page will display an alert message. Now if I load the page using
wbMain.Navigate(new Uri("pack://siteoforigin:,,,/myalertpage.htm", UriKind.RelativeOrAbsolute));
Or any path local to the system, you will end up the security warning like “To help protect your security, your web browser has restricted this file from showing active content that would access your computer…” like as shown in the picture :
This is a general problem for everybody using the WPF WebBrowser. To overcome it, you need to either load the HTML as a content stream or write
<!-- saved from url=(0014)about:internet -->\n\r
as the first line of your document. This instructs the browser that the page was loaded from about:internet, and the security warning will not be displayed. (0014) indicates how many characters to read as the URL; as about:internet uses 14 characters, we need to use 14 in the parentheses.
You can also use
<!-- saved from url=(0019) -->\n\r
if you want to. The url takes 19 letters, so I specified it as 19.
Accessing javascript from the webbrowser or invoking a .NET object can be done very easily using WPF WebBrowser control.
Communication between html document and WPF requires you to have full trust between the applications. In javascript, window.external points to the external application, which you might use to invoke a method outside the WebBrowser.
To do this you need to create a bridge between the two: a helper class that can be accessed directly from JavaScript. Let us look at how we can achieve this using the WPF WebBrowser control.
[PermissionSet(SecurityAction.Demand, Name = "FullTrust")]
[ComVisible(true)]
public class ObjectForScriptingHelper
{
Window1 mExternalWPF;
public ObjectForScriptingHelper(Window1 w)
{
this.mExternalWPF = w;
}
}
So basically this class allows you to invoke a .NET method directly from JavaScript. The helper class demands FullTrust permission and is marked ComVisible, so our WebBrowser, which is actually a COM component, can communicate with it directly and invoke methods on the ObjectForScriptingHelper class, which holds a reference to the parent window on which the browser is loaded. JavaScript will be able to use window.external to point to this class.
Say I have a method InvokeMeFromJavascript within the ObjectForScriptingHelper class. To use this class, you need to create an object of it and assign it to the ObjectForScripting property of the WebBrowser control.
So I write,
ObjectForScriptingHelper helper = new ObjectForScriptingHelper(this);
this.wbMain.ObjectForScripting = helper;
Now let's navigate to an HTML page containing:
<input type="text" id="txtMessage" />
<input type="button" value="InvokeMe" onclick="javascript:window.external.InvokeMeFromJavascript(document.getElementById('txtMessage').value);" />
This will load a textbox and a button. In the code above, I have used window.external to call the function that I declared in the ObjectForScriptingHelper class. Thus when you click the button inside the WebBrowser, you will see the message displayed in the TextBlock outside it.
In the above image, when the user clicks on InvokeMe inside the WebBrowser, it will update the TextBlock placed outside.
Now it is time to do the reverse. Suppose you want to invoke a JavaScript method from C#. This can also be done easily, using the InvokeScript method. The InvokeScript method of the WebBrowser control allows you to pass data from the external WPF application to the document loaded in the WebBrowser.
Let us take a look at how you can do this.
function WriteFromExternal(message){
document.write("Message : " + message);
}
Inside the HTML document, I wrote a simple JavaScript method named WriteFromExternal which takes a string argument. To invoke this method, you need to use
this.wbMain.InvokeScript("WriteFromExternal", new object[] { this.txtMessageFromWPF.Text });
Therefore the text written in the TextBox outside the WPF WebBrowser control gets passed to the javascript and hence the document gets refreshed with the Message.
You can see I have clicked the Button CallDocument which invokes the javascript method inside the Document with the message “Pass to JS” and which in turn writes the entire string within the document.
So it is very easy for a programmer to work with WPF WebBrowser control as it is very flexible according to our requirement. I hope you like this article very much. Any feedback is welcome. Thank you.
https://www.dotnetfunda.com/articles/show/840/working-with-webbrowser-in-wpf
Closed Bug 232545 Opened 16 years ago Closed 16 years ago
Text included by <marquee></marquee> is not shown
Categories
(Core :: XBL, defect, major)
Tracking
()
People
(Reporter: KKuhlemann, Assigned: neil)
Details
(Keywords: regression)
Attachments
(2 files)
User-Agent: Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7a) Gecko/20040128 Firebird/0.8.0+

On websites using scrolling text via the <marquee> tag, the text inside of <marquee></marquee> is not shown. This happens only in 1.7a (Firebird), roughly since the 2004-01-15 nightly. In Mozilla 1.6 the scrolling text works, as it does in Firebird pre-0.8 nightlies. Even old Netscape 4.x showed this text (without scrolling); the latest Firebird shows nothing at all.

Reproducible: Always

Steps to Reproduce:
1. Open a website with a <marquee> scrolling text.
2. The text inside the <marquee></marquee> tags is not shown.

Actual Results: Firebird failed to perform the marquee scrolling and doesn't show the text at all.

Expected Results: Perform the marquee scrolling as Mozilla 1.6 does.
Agreed. There are other bugs that stop marquees being rendered in both Seamonkey and Firebird (e.g. bug 208683) in certain situations, but this one seems to be Firebird-specific (I'm using 20040120 Firebird/0.7+). I'll attach a very simple test case to demonstrate.
Open the attachment in both Firebird and Seamonkey and compare. The marquee fails to function in recent builds of the former.
Bug also present in Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7a) Gecko/20040118 ->Browser - Layout: View Rendering
Component: General → Layout: View Rendering
Product: Firebird → Browser
Version: unspecified → Trunk
confirmed also on 2004013105/Mac. regression. marking New.
Assignee: blake → roc
Status: UNCONFIRMED → NEW
Ever confirmed: true
Keywords: regression
OS: Windows XP → All
QA Contact: ian
Hardware: PC → All
Please don't confirm bugs that are pretty clearly in the wrong component.... Just opening the JS console would have told you this is NOT a layout error. This works in build 2004-01-10-09 and is broken in build 2004-01-11-09. Backing out the XBL sink changes from bug 229526 fixes this problem.
Assignee: roc → hyatt
Component: Layout: View Rendering → XBL
The problem, of course, is that we are looking at document.getAnonymousNodes(this)[0].firstChild.firstChild where the markup is:

<content>
  <html:div xbl:
    <xul:hbox
      <html:div>
        <children/>
      </html:div>
    </xul:hbox>
  </html:div>
</content>

The [0] thing is the outermost <html:div>, since whitespace inside <content> _is_ stripped. Its firstChild, however, is no longer the <xul:hbox> but the textnode that we are no longer stripping from it. And the textnode's firstChild is undefined, of course (instead of being the inner <html:div>). Fixing this up is easy-ish, but I wonder how much other XBL that does similar things got broken by the change in bug 229526...
Assignee: hyatt → neil.parkwaycc.co.uk
Severity: normal → major
Flags: blocking1.7a?
Comment on attachment 140495 [details] [diff] [review] Proposed patch Are such changes announced anywhere before they land?
Attachment #140495 - Flags: review?(doronr) → review+
Comment on attachment 140495 [details] [diff] [review] Proposed patch sr=bzbarsky, and please try to lxr for patterns that may have been broken by the other patch, ok?
Attachment #140495 - Flags: superreview+
Fix checked in. A brief scan of LXR shows 5 classes of xml files that use the html namespace. 1) tests (don't use xbl) 2) obsolete (don't use xbl) 3) pretty print (does not appear to be affected) 4) textbox multiline="true" (already fixed) 5) this bug So, hopefully, there isn't anything left to fix.
So can this be marked fixed?
Flags: blocking1.7a?
Whoops, did I forget to select the knob :-(
Status: NEW → RESOLVED
Closed: 16 years ago
Resolution: --- → FIXED
The notorious marquees work fine, once again :) 20040204 Firebird/0.8.0+
https://bugzilla.mozilla.org/show_bug.cgi?id=232545
table of contents
NAME¶
CURLOPT_PROXY_SSLCERT - set SSL proxy client certificate
SYNOPSIS¶
#include <curl/curl.h>
CURLcode curl_easy_setopt(CURL *handle, CURLOPT_PROXY_SSLCERT, char *cert);
DESCRIPTION¶
This option is for connecting to an HTTPS proxy, not an HTTPS server.
Pass a pointer to a null-terminated string as parameter. The string should be the file name of your client certificate used to connect to the HTTPS proxy. The default format is "P12" on Secure Transport and "PEM" on other engines, and can be changed with CURLOPT_PROXY_SSLCERTTYPE(3).
When using a client certificate, you most likely also need to provide a private key with CURLOPT_PROXY_SSLKEY(3).
The application does not have to keep the string around after setting this option.
DEFAULT¶
NULL
PROTOCOLS¶
Used with HTTPS proxy
Added in 7.52.0
RETURN VALUE¶
Returns CURLE_OK if TLS is enabled, CURLE_UNKNOWN_OPTION if not, or CURLE_OUT_OF_MEMORY if there was insufficient heap space.
SEE ALSO¶
CURLOPT_PROXY_SSLCERTTYPE(3), CURLOPT_PROXY_SSLKEY(3), CURLOPT_SSLCERT(3),
https://manpages.debian.org/bullseye/libcurl4-doc/CURLOPT_PROXY_SSLCERT.3.en.html
CFD Online Discussion Forums - OpenFOAM Installation - Compiling with Intel compiler icc90
hjasak
August 24, 2005 20:31
Dear All,
I have just successfully compiled and run FOAM-1.2 with the latest Intel compiler, version 9.0. There are a few minor porting fixes which will be provided, but if somebody wants to have a go, please let me know.
Some performance tests will follow, but I'm not too interested - this was mainly to see what the new compiler has to say about the code, including porting issues and adherence to the standard. I think we pass with flying colours.
Hrv
henry
August 25, 2005 03:28
The reason you did not experience much difficulty porting OpenCFD's latest release of OpenFOAM with the latest Intel compiler is because I had already done it. I have also run speed tests and found little benefit from using the Intel compiler; in fact, for some codes and cases the gcc-compiled version runs faster, but the difference is not large.
hjasak
August 25, 2005 06:28
Not quite - there were still some errors in template handling for fvm discretisation so the released code could not have compiled with Intel 9.0.
henry
August 25, 2005 06:38
I didn't compile just before release, it was about a couple of weeks ago and I have made quite a few developments in that period. However because the Intel compiler did not show any significant advantages over gcc and has some disadvantages I didn't consider it a good use of my time on OpenFOAM to ensure the release version compiles with this compiler. I will check the next release as well but that won't be for a while so you will have to make any changes you need for this compiler yourself if it is important for you.
hjasak
August 25, 2005 06:47
Thanks - actually, the latest Intel compiler has only become available on 15/Aug.
With your permission, and in the community spirit, I would like to pass the porting changes, as well as a number of other bug fixes and extensive new developments in topological mesh handling and finite area method to OpenCFD for inclusion in the next release as we have done in the past - that way everybody can do a part of the work and we all benefit from it.
Hrv
henry
August 25, 2005 06:54
We are currently not in a position to support and maintain developments of OpenFOAM made by people outside OpenCFD, except those which are part of support or consultancy contracts, due to lack of finance.
michele
August 25, 2005 09:38
Trying to build OpenFOAM 1.2 with pathscale I encountered similar compiler errors in the fvm discretisation part of the code. I would also be interested in getting these fixes.
hjasak
August 25, 2005 10:07
E-mailed. The errors are to do with the fact that the fvc namespace has got a static function called flux, so flux<type>::type is mis-interpreted. Doing Foam::flux to pick up the flux class from vector.H (the right one) fixes the issue.
Hrv
hjasak
August 26, 2005 08:19
Hello Michelle,
It looks to me the cause of the problem is the:

interpolations/surfaceInterpolation/schemes/downwind/downwind.H:99: error: parse error before `;' token

I have just checked my version of downwind.H and there is no semicolon on line 99. All the other problems seem to follow from this.
Would you be so kind as to E-mail me your copy of the following file:
interpolations/surfaceInterpolation/schemes/downwind/downwind.H
just to make sure it's not something obvious; otherwise, it seems to me that the parser in the compiler you are using may be broken (not sure).
Hrv
hjasak
August 28, 2005 23:50
Just a short update on Intel 9.0. After the compiler pass (with optimization flags on), there is a second pass at link-time in which single-file and multi-file optimization is done. This is taking forever (about 2 hours to link the main FOAM library) and the compiler issues a bunch of interesting vectorisation messages.
Some vectorisation happens in the solvers as well and I am consistently getting 20-25% speedup over gcc-4.0.1. However, there seems to be an issue with the gzip library - looks like the compiler has optimized it a bit too aggressively.
Overall, 25% is pretty good for speed-up but the link stage is so criminally slow that I cannot use it in everyday development. :-(
Hrv
rbw
September 16, 2005 14:40
An update to Michele's post above.
I was also getting the same compilation error, it seems that the problem is with gcc versions earlier than 4.
I had the same error with gcc-3.3.6 on gentoo linux and gcc-3.4.3 on caos linux.
If it hurts when you do something, don't do it...
So, only compile the source code with gcc >= 4. I don't know if this is explicitly said anywhere else, so I think it bears registering here.
jkr
September 27, 2005 13:07
Too bad the error in downwind.C still appears with Etch's gcc-4.0 (Debian testing)...
$ gcc -v
gcc version 4.0.1 (Debian 4.0.1-2)
Is there a workaround, or is the error harmless (even if downwind does not build)?
jkr
September 27, 2005 13:45
Ok I've been mistaken in my previous post: manually compiling downwind.C does not lead to any error now, my apologies.
I've several versions of gcc on the machine (mix stable and testing) and the compiling scripts seem to have little trouble in using gcc-4.0. I'll check the scripts for that.
jasonb
December 1, 2005 10:25
Hi,
I'm trying to use the pathscale-2.3 compilers to build OpenFOAM on an Opteron platform, but am having the same problems as listed previously in this discussion some time ago, namely with downwind.H line 99 and a phantom semicolon. Was anybody ever able to figure out if this was a problem with the parser of the compiler, or something else entirely? I didn't have this trouble using gcc-4.0.1.
Thanks
Jason
hjasak
December 1, 2005 10:31
I am pretty certain pathscale is broken. The failure is in the front end by the looks of things and it is not trivial to get it to work...
I have provided a fix for the particular problem in downwind (somewhere at the forum, it needs scoping on names) but that's not the end of the matter.
Sorry,
Hrv
michele
December 1, 2005 10:50
Just for confirmation, I reported the bug (dtd. 31 aug 2005) to the pathscale developers. Here's their reply:
---
"Hi Michele,
Thanks for reporting the compilation issue.
This is to inform you that we could reproduce the problem in house.
A bug report (#8102) has been opened. We will notify you of any action
taken.
Note that you are correct the Pathscale compiler uses gcc 3.3 front-end
and it is on our roadmap to switch to a more recent version of gcc
(probably gcc 4).
Regards,
Didier."
---
Regards
Michele
jens_klostermann
March 23, 2006 09:08
Hi Hrv,
I am also trying to compile and run FOAM-1.2 with the latest Intel compiler, version 9.0. I experience the same problems as you did. However, I don't know how to handle the fix you suggested on Thursday, August 25, 2005 - 08:07 am in this thread:
Doing Foam::flux to pick up the flux class from vector.H (the right one) fixes the issue.
Can you please illustrate that a bit more?
Thank you!
hjasak
March 23, 2006 11:02
I cannot remember exactly (it's been a while), but from the records it seems that the failure is in fvcDdt.[HC]. My fixes looked something like this - look for the Foam::flux<Type>::type bit and do the same where required:
template<class Type>
tmp<GeometricField<typename Foam::flux<Type>::type, fvPatchField, surfaceMesh> >
ddtPhiCorr
(
    const volScalarField& rA,
    const volScalarField& rho,
    const GeometricField<Type, fvPatchField, volMesh>& U,
    const GeometricField
    <
        typename Foam::flux<Type>::type,
        fvPatchField,
        surfaceMesh
    >& phi
)
... blah blah
Hope this helps - if you get stuck, I can give you a copy of my development version for comparison.
Hrv
jens_klostermann
March 24, 2006 02:46
Problem solved. Thank you Hrv.
The error was in fvcDdt.[HC]. In these files

flux<Type>::type

has to be replaced by

Foam::flux<Type>::type

Be careful: it is ...Type>::type (with a capital T), not ...type>::type!
nishant_hull
October 27, 2007 11:35
I have also not been able to recompile FOAM to run it in debug mode. I do not know what exactly the problem is. Can anybody tell me whether this is because of the gcc compiler or something else? I have two gcc compilers on my Linux system: one default and one with the package. Precompiled programs on my system run fine, but so far I have not been able to run any program of my own.
The error after recompiling using ./Allwmake is:
/home/343880/OpenFOAM/linux/paraview-2.4.4/include/vtkDataObject.h: In member function 'virtual void vtkDataObject::ReleaseDataFlagOn()':
/home/343880/OpenFOAM/linux/paraview-2.4.4/include/vtkDataObject.h: In member function 'virtual void vtkDataObject::ReleaseDataFlagOff()':
/home/343880/OpenFOAM/linux/paraview-2.4.4/include/vtkDataObject.h: In member function 'virtual void vtkDataObject::RequestExactExtentOn()':
/home/343880/OpenFOAM/linux/paraview-2.4.4/include/vtkDataObject.h:273: warning: use of old-style cast
/home/343880/OpenFOAM/linux/paraview-2.4.4/include/vtkDataObject.h: In member function 'virtual void vtkDataObject::RequestExactExtentOff()':
/home/343880/OpenFOAM/linux/paraview-2.4.4/include/vtkDataObject.h:273: warning: use of old-style cast
`/home/343880/OpenFOAM/OpenFOAM-1.4.1/lib/linuxGccDPDebug/libvtkFoam.so' is up to date.
make[3]: Leaving directory `/home/343880/OpenFOAM/OpenFOAM-1.4.1/applications/utilities/postProcessing/grap hics/PVFoamReader/vtkFoam'
+ cd PVFoamReader
+ mkdir -p Make/linuxGccDPDebug
+ cd Make/linuxGccDPDebug
+ cmake ../..
./Allwmake: line 6: cmake: command not found
+ make
make[3]: Entering directory `/home/343880/OpenFOAM/OpenFOAM-1.4.1/applications/utilities/postProcessing/grap hics/PVFoamReader/PVFoamReader/Make/linuxGccDPDebug'
make[3]: *** No targets specified and no makefile found. Stop.
make[3]: Leaving directory `/home/343880/OpenFOAM/OpenFOAM-1.4.1/applications/utilities/postProcessing/grap hics/PVFoamReader/PVFoamReader/Make/linuxGccDPDebug'
make[2]: *** [PVFoamReader] Error 2
make[2]: Leaving directory `/home/343880/OpenFOAM/OpenFOAM-1.4.1/applications/utilities/postProcessing/grap hics'
make[1]: *** [graphics] Error 2
make[1]: Leaving directory `/home/343880/OpenFOAM/OpenFOAM-1.4.1/applications/utilities/postProcessing'
make: *** [postProcessing] Error 2
+ '[' 0 = 1 -a '' = doc ']'
How to validate and sanitize user input in JavaScript
What’s the best way to validate user inputs in JavaScript or Node.JS?
There’s a couple validator modules that I like to use
There are a lot of input validation libraries out there. But to me, the 2 listed above provide the easiest API.
If I’m looking for a sanitizing library, I’ll go with DOMPurify library or validator.js.
Now, don’t mix sanitizing with validating.
const isNotSame = validating !== sanitizing; // true
They’re not the same at all.
What is input validation
Input validation is like running tests about the data the user is filling out in a form.
If they’re is an email field, you want to make sure that it’s not empty, and that it follows a specific email format pattern.
If the form has a name field, you want to make sure that it’s not empty, it’s a string, and that it meets a minimum length requirement.
These tests are helpful to let the user know that the data they’ve entered is correct or incorrect.
If it’s incorrect you can send them a message to correct it.
Validating user input values can happen on the client side for a better user experience, and it should happen on the back end as well.
People can bypass the client-side code and send wrongly formatted data to the back-end. So validate in the back-end code as well.
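The back-end checks can be sketched without any library at all. The helper below is hypothetical (it is not part of validator.js or any other module) and deliberately uses a simplified email pattern for illustration:

```javascript
// Hand-rolled back-end validation sketch (hypothetical helper, not a library API).
function validateUser(data) {
  const errors = [];
  if (typeof data.name !== 'string' || data.name.trim().length === 0) {
    errors.push('Name must be a non-empty string');
  }
  // Deliberately simple email pattern; production code should use a vetted library.
  if (typeof data.email !== 'string' || !/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(data.email)) {
    errors.push('Email format is incorrect');
  }
  return errors;
}

console.log(validateUser({ name: 'Ruben', email: 'test@mail.com' })); // []
console.log(validateUser({ name: '', email: 'not-an-email' }).length); // 2
```

The same function can run on both the client and the server, which is the point: never trust that the client-side copy actually ran.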
What is output sanitizing
Sanitizing data is best done before displaying the data to the user screen.
It cleanses the original data to prevent it from exploiting any security holes in your application.
I don’t recommend doing input sanitizing because you may risk altering the data in ways that make it unusable.
Validating input: Okay solution
validator.js is a great and easy input validation module to use.
It can be used in Node.js and client-side JavaScript.
import validator from 'validator'; // 62.8k size

const data = {
  name: 'Ruben',
  about: 'I like long walks in the beach.',
  email: 'test@mail.com',
};

// Check if name or about field is empty
if (validator.isEmpty(data.name) || validator.isEmpty(data.about)) {
  console.log('Name or about field is empty');
}

// Check if email format is correct
if (!validator.isEmail(data.email)) {
  console.log('Email format is incorrect');
}
The caveat with this library is that it only validates strings.
If you don’t know your input is going to be a string value than I’d suggest to convert it into a string with a template literal such as backtick quotes.
const age = 20; console.log(typeof `${age}`); // string
If you don’t support ES6 features, than try this method:
const age = 20; console.log(age + ''); // '20'
This module is 62.8K and can be reduced to 16.2k if it’s gzipped.
You can also just import modules you need instead of the whole library.
import isEmpty from 'validator/lib/isEmpty'; // 2.2k
import isEmail from 'validator/lib/isEmail'; // 6.9k
In my opinion, I believe validator.js is really nice, but there is better.
Validating input: Best solution
Yup is a lightweight JavaScript schema builder. Yup also parses data and validates it.
Building a schema is extremely simple. Let’s build a schema for the data object that was used above.
First we’re going to create a variable called
schema and begin defining the keys of our object.
const schema = yup.object().shape();
Now I will pass my object inside the yup.object().shape() function.
const schema = yup.object()
  .shape({
    name: yup.string().required().min(1, 'Please enter a name'),
    about: yup.string()
      .min(10, 'Enter more than 9 characters')
      .max(160, 'Cannot be more than 160 characters'),
    email: yup.string().email('Please enter a valid email address'),
  });
What I like about Yup is that you can enter custom messages for every test.
Here’s how to test your Yup schema:
const data = {
  name: 'Ruben',
  about: 'I like long walks in the beach.',
  email: 'test@mail.com',
};

schema.validate(data)
  .then(data => console.log(data))
  .catch(err => console.log(err));
Validation is an asynchronous process, but Yup has utility functions to make it synchronous. I'd still recommend staying asynchronous as much as possible.
Output sanitizing: Okay solution
Most template languages (Pug, JSX, etc) have output sanitizing built in by default.
But if you don’t you can use validator.js to clean up a dirty string.
const dirty = `I love to do evil <img src=" onload="alert('you got hacked');" />`;
const clean = validator.escape(dirty);
validator.escape() will convert HTML tags into HTML entities.
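To make that concrete, here is a simplified sketch of what an escape step does. This is not validator.js's actual implementation, just an illustration of converting special characters into HTML entities:

```javascript
// Minimal HTML-escape sketch: maps each special character to its HTML entity.
function escapeHtml(input) {
  const entities = { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#x27;' };
  return String(input).replace(/[&<>"']/g, ch => entities[ch]);
}

console.log(escapeHtml('<img src=x onerror="alert(1)">'));
// &lt;img src=x onerror=&quot;alert(1)&quot;&gt;
```

Because the angle brackets become entities, the browser renders the payload as inert text instead of parsing it as markup.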
Output sanitizing: Better solution
I like to use DOMPurify as my main option to sanitize data. I believe it does a much better job.
If you grab the same variable, dirty, and cleanse it with DOMPurify, the output would look like:
import DOMPurify from 'dompurify';

const clean = DOMPurify.sanitize(dirty);
// I love to do evil <img src="
It leaves the HTML element, img, but it removes any funky HTML attributes.
Let’s dirty up the string a bit more and see what happens.
const dirty = `I love to do evil <img src=" onload="alert('you got hacked');" /> <script>alert("YOU GOT HACKED!");`;

const clean = DOMPurify.sanitize(dirty);
// I love to do evil <img src="
DOMPurify removes any script HTML elements and its content.
If you must do input sanitizing
Again, sanitizing really depends on the context of the data. There are cases where sanitizing input is a must.
To sanitize the users input data you can still use validator.js as I demonstrated above.
Validator.js is supported with both client-side and back-end code.
If you want to make DOMPurify work with Node.js, you’ll have to install an extra NPM module to make it work.
You can check if DOMPurify is supported in your environment by running this if conditional.
if (DOMPurify.isSupported) {
  // ...do some stuff
}
Here’s the full code to making DOMPurify work in Node.js. Make sure to install jsdom.
npm i -S jsdom dompurify
import createDOMPurify from 'dompurify';
import { JSDOM } from 'jsdom';

const dirty = `I love to do evil <img src=" onload="alert('you got hacked');" /> <script>alert('you got hacked!')</script>`;

const windowEmulator = new JSDOM('').window;
const DOMPurify = createDOMPurify(windowEmulator);

if (DOMPurify.isSupported) {
  const clean = DOMPurify.sanitize(dirty);
  console.log(clean); // I love to do evil <img src="
}
I like to tweet about JavaScript and post helpful code snippets. Follow me there if you would like some too!
There's no better way to explain a code-related issue than providing a test for it, and that is what any NH team member is going to ask you for no matter how clearly you describe it. That said, why not be smart and provide one from the beginning?
For those who don't know what a unit test is, or why it could possibly be useful: a unit test is nothing more than a method with some code in it to test whether a feature works as expected or, in your case, to reproduce a bug. What makes them so useful is the ability to automatically execute them all; if you hypothetically had a set of tests for every feature you coded into the software you're designing, after every change you could check whether everything still works or something got broken. If that caught your attention, you can read further information on Unit Tests and Test Driven Development here and here, while here you can download and get some info on NUnit, which is the testing framework the NHibernate team is currently using; obviously you can google around a bit for more info on this topic, as I'm going to focus on how testing applies to the NHibernate bug fixing process.
Ok, back on topic then. If you dared to download NHibernate sources from SourceForge, or perhaps the trunk itself using a SVN client, you'd find a well established test project with some base classes you should use to implement the test. BTW, for the sake of simplicity, I created a C# project extracting only the few classes you need to build a test, so you don't need to use the actual NH sources anymore. You can download it here.
The project has the following structure, which is very similar to the one you'd find in the official sources:
I've maintained class and namespace naming in order to let your test compile in the actual NH test project without (hopefully) changing anything in it. Next steps will be
Please note that all the code should be located in a subfolder of the NHSpecificTest folder, named after the Jira entry you submitted (for example, NH1234). So, once you've created the issue, you should do a little refactoring work to adjust your test's folder and namespaces.
It would be hard to test an ORM without a domain model, so a simple one is mandatory, along with its mappings. My advice here is to keep things as simple as possible: your main aim should be trying to isolate the bug without introducing unnecessary complexity.
For example, if you find out that NHibernate isn't working fine retrieving a byte[] property when using a Sql Server 2005 RDBMS (it isn't true, NHibernate can deal quite well with such kind of data), you should create a domain entity not so different from the following:
namespace NHibernate.Test.NHSpecificTest.NH1234
{
    public class DomainClass
    {
        private byte[] byteData;
        private int id;

        public int Id
        {
            get { return id; }
            set { id = value; }
        }

        public byte[] ByteData
        {
            get { return byteData; }
            set { byteData = value; }
        }
    }
}
Mappings of such a simple domain model should be quite easy and small sized; the standard way to proceed is creating a single mapping file, named Mappings.hbm.xml, containing mapping definitions of all your domain model.
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" assembly="NHibernate.Test"
    namespace="NHibernate.Test.NHSpecificTest.NH1234" default-access="field.camelcase">

    <class name="DomainClass">
        <id name="Id">
            <generator class="assigned" />
        </id>
        <property name="ByteData" />
    </class>
</hibernate-mapping>
As we're going to see shortly, test base class, in his default behavior, looks for such a file in test assembly's resources and automatically adds it to NHibernate configuration.
A test class is nothing more than a class decorated with the TestFixture attribute and which inherits (in our test environment) from BugTestCase base class. If you remember, we are going to test a fake bug about NHibernate incorrectly retrieving a byte[] property from the database. That means that first of all we're going to need a database with a suitable schema and then an entity already stored in it.
The default connection string points to a localhost server containing a test database called NHibernate; if your environment matches that, you have nothing to change in the app.config file. The test base class takes care of creating all the tables, keys and constraints that your test needs, taking into account what you wrote in the mapping file(s).
What about the entity we should already have into a table to test the retrieving process? The right way to inject custom code before the execution of a test is overriding the OnSetup method:
protected override void OnSetUp()
{
    base.OnSetUp();
    using (ISession session = this.OpenSession())
    {
        DomainClass entity = new DomainClass();
        entity.Id = 1;
        entity.ByteData = new byte[] {1, 2, 3};
        session.Save(entity);
        session.Flush();
    }
}
That code is invoked once per test, just before its execution; the infrastructure provides an OnTearDown virtual method as well, useful to wipe your data from the tables and present a clean environment to the next test being executed:
protected override void OnTearDown()
{
    base.OnTearDown();
    using (ISession session = this.OpenSession())
    {
        string hql = "from System.Object";
        session.Delete(hql);
        session.Flush();
    }
}
The test class is almost set up; we only need one last element. Our imaginary bug happens only when dealing with a Sql Server 2005 database; things seem to be fine with other RDBMSs. That means the corresponding test only makes sense when that dialect has been selected, and otherwise it should be ignored. Another virtual method, called AppliesTo, serves this purpose and can be overridden to specify a particular dialect for which the test makes sense:
protected override bool AppliesTo(NHibernate.Dialect.Dialect dialect)
{
    return dialect as MsSql2005Dialect != null;
}
The test method is where you'll describe how NHibernate should behave, although it actually doesn't. It's a traditional C# method that usually ends with one or more Asserts, by which you verify whether things went as you expected. Our "fake" bug was "In Sql Server 2005, NHibernate can't correctly load a byte array property of an entity"; a good test for that could be something like:
[Test]
public void BytePropertyShouldBeRetrievedCorrectly()
{
    using (ISession session = this.OpenSession())
    {
        DomainClass entity = session.Get<DomainClass>(1);

        Assert.AreEqual(3, entity.ByteData.Length);
        Assert.AreEqual(1, entity.ByteData[0]);
        Assert.AreEqual(2, entity.ByteData[1]);
        Assert.AreEqual(3, entity.ByteData[2]);
    }
}
Your test class may contain as many test methods as you need to better show the cases in which you experienced the issue. If it relies on the NHibernate 2nd level cache, you can turn caching on and off simply by overriding the CacheConcurrencyStrategy property in your test class:
protected override string CacheConcurrencyStrategy
{
    get { return "nonstrict-read-write"; }
}
Please remember that in the simple test project I provided, 2nd level cache is disabled by default. However, NHibernate official test project uses nonstrict-read-write caching strategy for every entity, because every "green" test should pass with caching enabled as well.
When NHibernate doesn't work as expected, the best way to describe the issue is providing a good unit test. NHibernate.LiteTest helps you writing tests that are so similar to the official ones to be directly integrable in the actual NHibernate trunk. So, if you think you've just discovered a bug,
Obviously, if you think you're good enough, no one will be offended if you submit a patch, too.
Good article, Crad!
I suggest executing the test cases against the trunk, because the bug may already be fixed but not yet released.
If I am to summarize what to do:
1. Download the trunk, use it always(even if one isn't going to create patch)
2. Search jira for similar issues, and if there is nothing similar and you still think that it is a bug to be fixed, create a jira issue.
3. Create NHxyzt folder(xyzt being the bug number).
4. Add your domain classes and then mappings(Mappings.hbm.xml)
5. Create the test code which illustrates the problem. Override OnSetUp and OnTearDown for your initial data and other initializations.
6. Run the test case see if it is failing.
7. If you know how to fix, then fix it in NHibernate project, if not, it still fine.
8. Create the patch against src folder using
Right Click "src" folder and go to Tortoise SVN->Create Patch. Make sure it doesn't contain any dll's(A test case doesn't need compiled file, source is fine) and contain the files you have modified, don't forget to include .csproj files.
9. Add the patch to Jira.
And I think we need to define coding standards.
Things like camelCase for fields, etc.
- The patch is better in a specific separate file.
About "Download the trunk": it is enough to say "Download the last available version"; for somebody, "trunk" is a heavy word.
Another point is:
If you find a similar issue but it was closed, please, create a new one with your specific test if the old one was closed as fixed.
I think the information contained in this article should be included in this post. This article describes exactly what should be done to get started creating a test for a Jira issue, and I don't believe this is referenced anywhere throughout the wiki.
Nobody is maintaining the nhibernate.org domain. You should look here at NHForge for all NHibernate info.
Best regards
Dario,
My point was that there is valuable information in this link that is not mentioned here, and hopefully it can be pulled over into this post. For example, there is information on how to configure NHibernate for your test environment without changing app.config.
@jnapier
Info on how to configure NH is available here.
The ways to configure NH are outside the scope of this blog post.
I suggest moving this post over to the Wiki. It covers an issue of central importance to NH and, obvious from comments above, many people are interested in improving upon it. Moving it over would make it possible to incorporate these, and other, improvements into the original article.
You are right.
Fabio, very good post.
And I agree that you expressed your opinion actively on NH-1818. That kind of bug report is not good; it wastes lots of NH supporters' time.
The links to download the sample test projects () give Access Denied errors.
Please update this post, as it seems the test project is no longer available and I have no clue how to get the trunk building.
I'd like to submit a bug and a fix :)
Ok, I think I get the gist of the 405 thing a little.
When I do "PythonHandler mod_python.publisher" in httpd.conf I am telling
it to use the publisher.py (in the lib of the source download). At least
this is my guess :-)
I am new to this, so I suppose I didn't understand before. But it looks
like I really shouldn't be writing to the request object (*if*) I am going
to use the mod_python.publisher. All I should do is return something
interesting from my functions. I am beginning to understand how
publisher.py works a bit better now.
from publisher.py
def handler(req):
    req.allow_methods(["GET", "POST"])
    if req.method not in ["GET", "POST"]:
        raise apache.SERVER_RETURN, apache.HTTP_METHOD_NOT_ALLOWED
So that is why I am getting a 405 on HEAD requests. I really like to do
HEAD requests and I have a feeling that search engine spiders like that
sort of activity as well. Maybe publisher.py should allow it? Does anyone
know why this method was forbidden?
--
Waitman Gobble
(707) 237-6921
You’re sitting at your desk, glaring at your monitor, but it glares back at you with equal determination.
Every change you make introduces new bugs, and fixing a bug causes another bug to pop up.
You don’t understand why things are randomly breaking, and the lines of code just increase every day.
However, by coding in a rigorous and specific fashion, you can prevent many of these issues simply by being slightly paranoid. This paranoia can save you hours in the future, just by dedicating a few extra seconds to include some additional safeguards.
So without further ado, let’s jump right into the top five tips for safer code.
1. Stop Accepting Garbage Input
The common phrase “Garbage in, Garbage out” is one that rings strongly with many programmers. The fact is, if you accept garbage input, you’re going to pass out garbage output. If your code has any modularity at all, then something like this will likely happen :
def foo(input):
    do_stuff

def bar(input):
    do_other_stuff

garbage_input = "Hi. I'm garbage input."
some_variable = foo(bar(garbage_input))
As you call foo and bar and other functions, all of which depend on garbage_input, you find that everything has turned into garbage. As a result, functions will start throwing errors a few dozen passes down the line, and things will become very difficult to debug.
Another common mistake is attempting to correct the user’s input in potentially ambiguous cases, which leads to the second tip.
2. Don’t Try to Correct Garbage Input
Let’s take an example scenario :
Imagine you had a box that exported values from 0 to 1 on a display, depending on the number the user passed in.
One day, you suddenly get a value of 1.01, a value slightly higher than the maximum. Now, this should raise a red flag for most programmers. However, some programmers resort to doing the following :
def calculateValue(temperature):
    do_calculations

def getBoxValue(temperature):
    if calculateValue(temperature) > 1:
        return 1
    elif calculateValue(temperature) < 0:
        return 0
    else:
        return calculateValue(temperature)
The technique shown above is known as clamping, which is basically restricting the value to a certain range. In this case, it is clamped to 0 and 1. However, the problem with the above example is that it is now impossible to debug the code.
If the user passed in bad input, you would get a clamped answer, instead of an error, and if the calculateValue function was buggy, you would never know. It could be slightly inflating the value, and you would still never know, because the values would be clamped.
As an exaggerated example, if calculateValue returned 900,000,000, all you would see is “1”. Instead of embracing and fixing bugs, this tactic throws them under the carpet in the hopes that no one will notice.
A better solution would be :
def calculateValue(temperature):
    do_calculations

def getBoxValue(temperature):
    if calculateValue(temperature) > 1 or calculateValue(temperature) < 0:
        raise ValueError('Output is greater than 1 or less than 0.')
    else:
        return calculateValue(temperature)
If your code is going to fail, then fail fast and fix it fast. Don’t try to polish garbage. Polished garbage is still garbage.
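The same fail-fast idea translates directly to JavaScript; this sketch mirrors the article's hypothetical getBoxValue example, with the range check made explicit:

```javascript
// Fail fast: surface out-of-range values instead of clamping them away.
function getBoxValue(raw) {
  if (raw > 1 || raw < 0) {
    throw new RangeError(`Output ${raw} is outside the expected [0, 1] range`);
  }
  return raw;
}

console.log(getBoxValue(0.5)); // 0.5

try {
  getBoxValue(900000000); // a buggy upstream calculation is now impossible to miss
} catch (e) {
  console.log(e instanceof RangeError); // true
}
```

With clamping, the 900,000,000 bug would have been displayed as a perfectly plausible "1"; with the thrown RangeError, it is caught the first time it happens.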
3. Stop Double Checking Boolean Values in If Statements
Many programmers already adhere to this principle, but some do not.
Since Python prevents the bug caused by double checking a boolean value, I will be using Java, as the bug can only happen in languages where assignment is possible in if statements.
In a nutshell, if you do this :
boolean someBoolean = true;
if (someBoolean == true) {
    System.out.println("Boolean is true!");
} else {
    System.out.println("Boolean is false!");
}
In this case,
if(someBoolean == true)
Is exactly equivalent to :
if(someBoolean)
Aside from being redundant and taking up extra characters, this practice can cause horrible bugs, as very few programmers will bother to glance twice at an if statement that checks for true/false.
Take a look at the following example.
boolean someBoolean = (1 + 1 == 3);
if (someBoolean = true) {
    System.out.println("1 + 1 equals 3!");
} else {
    System.out.println("1 + 1 is not equal to 3!");
}
At first glance, you would expect it to print out “1 + 1 is not equal to 3!”. However, on closer inspection, we see that it prints out “1 + 1 equals 3!” due to a very silly but possible mistake.
By writing,
if(someBoolean = true)
The programmer had accidentally set someBoolean to true instead of comparing someBoolean to true, causing the wrong output.
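Java is not the only language with this hazard; the identical slip runs silently in JavaScript, as this quick sketch shows:

```javascript
// The same slip in JavaScript: '=' assigns, '===' compares.
let someBoolean = (1 + 1 === 3); // false

let message;
if (someBoolean = true) { // BUG: assignment, so the condition is always true
  message = '1 + 1 equals 3!';
} else {
  message = '1 + 1 is not equal to 3!';
}
console.log(message); // '1 + 1 equals 3!'
```

Most linters (e.g. ESLint's no-cond-assign rule) flag this pattern precisely because it is so easy to miss in review.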
In languages such as Python, assignment in an if statement will not work. Guido van Rossum explicitly made it a syntax error due to the prevalence of programmers accidentally causing assignments in if statements instead of comparisons.
4. Put Immutable Objects First In Equality Checks
This is a nifty trick that piggy backs off the previous tip. If you’ve ever done defensive programming, then you have most likely seen this before.
Instead of writing :
if (obj == null) {
    //stuff happens
}
Flip the order such that null is first.
if (null == obj) {
    //stuff happens
}
Null is a literal, not a variable, so it can never be the target of an assignment. If you accidentally write null = obj instead of null == obj, Java rejects it immediately with a compile error.
As a result, you can prevent the silly mistake of accidentally causing unintentional assignment during equality checks. Naturally, even with obj first, writing obj = null will not compile here either, because the if condition receives an object reference when it expects a boolean.
However, if you are passing around methods inside the if statement, it can become dangerous, particularly methods that will return a boolean type. The problem is doubly bad if you have overloaded methods.
The following example illustrates this point :
final int CONSTANT_NUM = 5;

public boolean foo(int x) {
    return x % 2 != 0;
}

public boolean foo(boolean x) {
    return !x;
}

public void compareVals(int x) {
    if (foo(x = CONSTANT_NUM)) {
        //insert magic here
    }
}
In this example, the user expects foo to be passed in a boolean of whether or not x is equal to a constant number, 5.
However, instead of comparing the two values, x is set to 5. The expected value if the comparison was done correctly would be false, but if x is set to CONSTANT_NUM, then the value will end up being true instead.
5. Leave Uninitialized Variables Uninitialized
It doesn’t matter what language you use, always leave your uninitialized variables as null, None, nil, or whatever your language’s equivalent is.
The only exception to this rule is booleans, which should almost always be set to false when initialized. The exception is for booleans with names such as keepRunning, which you will want to set initially to true.
In Java’s case,
int x;
String y;
boolean z = false;
In particular, for Python especially, if you have a list, make sure that you do not set it to an empty list.
The same also applies to strings.
Do this :
some_string = None
list = None
Not this :
some_string = ''
list = []
There is a world of a difference between a null/None/nil list, and an empty list, and a world of a difference between a null/None/nil string, and an empty string.
An empty value means that the object was assigned an empty value on purpose, and was initialized.
A null value means that the object doesn’t have a value, because it has not been initialized.
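A quick JavaScript sketch makes the difference concrete: the uninitialized value fails loudly the moment it is used, while the empty value silently behaves like real data:

```javascript
let uninitialized = null; // "no value yet"
let empty = '';           // deliberately assigned an empty value

console.log(uninitialized === empty); // false: they are not interchangeable

// Using the uninitialized value fails loudly, pointing straight at the bug...
let threw = false;
try {
  uninitialized.toUpperCase();
} catch (e) {
  threw = e instanceof TypeError;
}
console.log(threw); // true

// ...while the empty string silently "works" and can slip downstream unnoticed.
console.log(empty.toUpperCase() === ''); // true
```

The loud TypeError is the point: it surfaces the missing initialization at the first use instead of letting an empty placeholder propagate.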
In addition, it is good to have null errors caused by uninitialized objects.
It is unpleasant to say the least when an uninitialized string is set to “” and is prematurely passed into a function without being assigned a non-empty value.
As usual, garbage input will give you garbage output.
Conclusion
These five tips are not a magical silver bullet that will prevent you from making any bugs at all in the future. Even if you follow these five tips, you won’t suddenly have exponentially better code.
Good programming style, proper documentation, and following common conventions for your programming language come first. These little tricks will only marginally decrease your bug count. However, they also only take about an extra few seconds of your time, so the overhead is negligible.
Sacrificing a few seconds of your time for slightly safer code is a trade most people would take any day, especially if it can increase production speed and prevent silly mistakes.
One thought on “Five Great Practices for Safer Code”
[…] Read the full post at Safer Code. […]
Does anyone know how I can fix this?
Wow, that's really weird... I could have sworn it didn't work the other day! Perhaps I was clicking on another node instead of pressing "Enter".
I have a second problem, however, regarding sorting...
Hi,
I've been messing around with JTrees recently and I'm finding it rather difficult to get them to work the way I want. Firstly is to do with being able to rename nodes; I had it working at one...
Hey,
Thanks for all the info guys - for now I'm using a JTextArea until I can get some other problems sorted out but this will come in very handy later!
- Danjb
Hehe, thanks again for looking into it anyway. I might just use a JTextArea, at least for now, but eventually I'd like to be able to display slightly more elaborate documents...
If anyone can suss...
Of course, and thanks for looking into it:
package gui;
import java.awt.Dimension;
import javax.swing.JEditorPane;
import javax.swing.JFrame;
import javax.swing.JPanel;
Does anyone have any insight on this?
Thanks for your reply - I'm glad it's not just me being stupid.
I tried following those instructions, though, and got stuck at the first hurdle; "We override ParagraphView class with own...
Hi,"...
I have an Aurelia project which is divided in several projects. My folder structure looks like this
/myApp
/.idea
/myApp
/myModule1
/myModule2
/myModule3
Each of those is a standalone jspm package (each has its own package.json file). Module myApp refers to myModule1, myModule2 and myModule3 via "jspm install".
in myApp/file.js I can do this:
import {Foo} from 'myModule1' //no file path here; this is referencing the logical name myModule1
but a WebStorm inspector warns me that myModule1 is not installed, therefore it can't give me autocomplete.
So my question is : How can I tell WebStorm about my internal libraries so that it recognizes their name and provide autocomplete?
Thanks
there is currently no way to do this, please vote for
So how are we supposed to do this?
Are you using JSPM? Is it a TypeScript or javascript project? Sample project would be helpful
The Vital Guide to Ruby Interviewing
The Challenge
In his book Seven Languages in Seven Weeks, Bruce Tate compares Ruby to Mary Poppins: “She’s sometimes quirky, always beautiful, a little mysterious, and absolutely magical.” If we really had to pick a character to compare with this language, we could hardly make a better choice. The comparison, of course, stems from the philosophy of Ruby’s design: the language should make programmers happy.
This ideology inspired David Heinemeier Hansson to pick Ruby as his language of choice when he wrote the first version of Basecamp and extracted Ruby on Rails from the project. Since the language became popular due to the success of Rails, it’s common to hear people misusing “Ruby” and “Rails” synonymously, or to miscategorize Ruby as a “web language”.
As experienced Rubyists know, the language is not in any way limited to web development, and can be used for almost anything: writing native smartphone apps, data-processing scripts, or as a tool to teach people to program.
Even though the language markets itself as being developer-friendly, it can provide a few surprises for programmers coming from the other languages, because it changes the semantics of many familiar concepts in unexpected ways.
Mastering Ruby can be hard for a few reasons, and the popularity of Rails is one of them. Namely, many developers learned just enough Ruby to start using Rails and stopped exploring the language after that. Due to Ruby’s magical typing scheme, it can be hard for these developers to understand what is pure Ruby and what is an extension built into Rails. Given the many extensions Rails provides, the motivation to study the basics of the language is diminished.
Additional complexity lies in the multiple faces of Ruby. While it is primarily an object-oriented language, it also allows writing programs in the functional programming style, which can be leveraged for improved expressiveness and code readability. This means that Ruby provides many ways to do a single thing, and thus the “Class A” Ruby programmer needs to understand all of the different personalities of the language.
It doesn’t help that today we’ve got multiple implementations of Ruby, including MRI, JRuby, Rubinius, and mruby. Thus, to make good decisions and judgments, Ruby developers need to be, at the very least, aware of the different implementations, and their strengths and weaknesses.
The following guide gives you some questions for inspiration when preparing an interview for top-notch Ruby developers. You should not assess candidates based purely on their ability to “correctly” answer each question. That would be missing the point, because not every top candidate will actually know all the details about the language, nor does knowing all the details guarantee you the best developer. Instead, use the questions to see how the candidate thinks about the problem at hand, how they work through difficult problems, and how they weigh alternatives and make judgment calls. For example, if the candidate doesn’t know the answer, ask how they would get it, provided they had a computer with internet access and a Ruby interpreter installed. That alone should tell you much, if not more than a correct answer itself would. You should also check out Toptal’s general guide on hiring candidates, published on our blog: In Search of the Elite Few.
Note that this guide focuses on pure Ruby. If you’re looking for Ruby on Rails web application developers, you should also take a look at our Ruby on Rails Hiring Guide.
Welcome to Ruby
Consider this section a warm-up for the discussion: it will help you see if the candidate ever wrote anything more than a simple Rails app.
Q: What are the differences between classes and modules? What are the similarities?
Modules in Ruby have two purposes: namespacing and mixins.
Consider an application that processes maps with points of interest. Two programmers working on this app could come up with two different classes and give them the same name. Let's say the first one creates a class Point, which represents a point in a plane defined by x and y coordinates. The programmer then uses this class when rendering maps as images. The other programmer's Point class models a point of interest on the map and has attributes specifying the latitude, the longitude, and the description.
When a Ruby interpreter finds multiple class definitions with the same name, it assumes that the programmer's intent is to "reopen" the class defined before and add the new methods to the previous class. This could cause various types of issues down the road, so it's best to put unrelated parts of an app in different namespaces in order to reduce the chance of naming collisions. To do this in Ruby, we can put separate behavior in different modules, e.g. Rendering and Geo:
module Rendering
  class Point < Struct.new(:x, :y)
  end
end

module Geo
  class Point < Struct.new(:lat, :long, :description)
  end
end
Now the full names of the classes would change to Rendering::Point and Geo::Point, so there would be no conflict.
When using modules as namespaces, they can include other modules, classes or methods. Later, one can include the module in other contexts (different modules or classes) to remove the need to fully spell-out their names.
Classes can contain modules, other classes, and methods, but one can’t include them in the same way they would include a module.
As mixins, modules often group methods that can be included into the other classes. However, we cannot create instances of modules. Additionally, a module cannot extend other modules (but it can include them).
Classes can combine methods, they can be instantiated, and they can be extended by other classes.
In Ruby, almost everything is an object, so classes and modules are objects as well. Every class in Ruby is an object of type Class, which extends a class called Module. Looking at this inheritance hierarchy, we can clearly see that modules provide just a subset of the features provided by classes.
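The hierarchy described above can be verified directly in any Ruby interpreter:

```ruby
# Class really is a subclass of Module, so modules offer a subset of
# what classes do.
puts Class.superclass   # Module
puts Module.superclass  # Object
puts Class < Module     # true -- Class inherits from Module
```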
Q: How does Ruby look up a method to invoke?
Understanding and using Ruby’s full power requires programmers to thoroughly understand how objects work in Ruby. This is important not only for wielding a great power like metaprogramming, but also to understand concepts that might look weird at first, like definition of the class methods.
Given this snippet of code:
mike = Actor.new(first_name: 'Mike', last_name: 'Myers')
mike.full_name
Where would Ruby look for a definition of the method full_name?
The first place where Ruby would look up the method is in the object’s own metaclass or eigenclass. This is a class that backs every single object and that contains methods defined on that object directly, distinct from all other instances.
For example, the following definition adds a method full_name to the metaclass of the object mike:
def mike.full_name
  "#{first_name} #{last_name}"
end
Since the metaclass is a class specific to this object, no other instances of class Actor would have this method.
If Ruby can't find a method in the object's metaclass, it starts looking for the method in the ancestors of the object's class (i.e. the ancestors of Actor).
So, what are the ancestors of a class? We can ask Ruby to tell us!
Actor.ancestors # => [Actor, Object, Kernel, BasicObject]
We can see that the list of ancestors of any class in Ruby begins with the class itself, includes all ancestor classes (Object, BasicObject), but also modules that are included in any of the classes in the inheritance hierarchy (e.g. Kernel). This is something that a Ruby programmer should understand.
Since Ruby 2.0, it’s up to the programmer to decide where to place the modules in the ancestors list.
Let’s take a look at two examples. We’ll assume this is Ruby version 2.0 or above:
Example 1: Including the FullName module into the class Actor.
class Actor < Person
  include FullName
end
If we now look up the ancestors list of the class Actor, we'll see that the module FullName appears between the Actor and the Person classes:
Actor.ancestors # => [Actor, FullName, Person, Object, Kernel, BasicObject]
Example 2: Prepending the FullName module to the class Actor.
class Actor < Person
  prepend FullName
end
By prepending FullName, we've told Ruby to put the module before the class itself in the ancestors list:
Actor.ancestors # => [FullName, Actor, Person, Object, Kernel, BasicObject]
If Ruby searches the entire ancestor list and can't find the method by the given name, Ruby will internally send another message (method call) to the object: method_missing. The system will repeat the lookup for this method, and will find it at least in the BasicObject class (sooner, if the programmer has defined it in an earlier ancestor) and execute it.
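This fallback can be hooked by defining method_missing earlier in the ancestor chain. A minimal sketch (the NullLogger class name and behavior are our own, not from the article):

```ruby
class NullLogger
  # Invoked when the lookup fails; swallow the message instead of raising.
  def method_missing(name, *args)
    "ignored #{name}"
  end

  # Keep respond_to? consistent with method_missing.
  def respond_to_missing?(name, include_private = false)
    true
  end
end

NullLogger.new.log_failure("disk full")  # => "ignored log_failure"
```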
Q: Does Ruby support multiple inheritance? Is it needed at all?
Like other modern languages, Ruby supports only single inheritance, and that's cited as a feature by Yukihiro Matsumoto, the language's creator.
There’s no need for Ruby to support multiple inheritance, since everything that could be done with it can be achieved with duck-typing and modules (i.e., mixins).
Since Ruby is a dynamically-typed language, there's no need for objects to be of a specific type in order for a program to run. Instead, Ruby leverages duck-typing: if an object quacks like a duck, and walks like a duck, it's a duck. That is, we can ask any object if it knows how to respond to a message. Or we can simply send a message and trust the object to figure out how to handle it.
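A small duck-typing sketch (the class and method names here are our own illustration):

```ruby
class Duck
  def quack
    "quack!"
  end
end

class Robot
  def quack
    "beep-quack"
  end
end

# Ask the object whether it can handle the message instead of checking its type.
def make_it_quack(thing)
  thing.respond_to?(:quack) ? thing.quack : "silence"
end

make_it_quack(Duck.new)   # => "quack!"
make_it_quack(Robot.new)  # => "beep-quack"
make_it_quack(42)         # => "silence"
```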
Another use of multiple inheritance is to share code among different classes which cannot be modeled as a chain. To achieve that, we can write the code we want to share as a method in a module and include it into other classes as needed. This concept of grouping methods to be included in multiple classes is precisely what defines a mixin.
We should note that including a module into a class adds it to the class’ ancestors list:
class Actor < Person
  include FullName
end

Actor.ancestors
# => [Actor, FullName, Person, Object, Kernel, BasicObject]
Which means that once the module has been mixed in, there’s no difference between the methods that come from a superclass, and those coming from a mixin.
Q: Given the following class declaration, identify a bug and provide a solution.
class Worker
  attr_reader :data

  def initialize(d)
    data = d
  end

  def call
    # do a process that requires @data
  end

  private

  def data=(d)
    @data = clean(d)
  end

  def clean(d)
    # return sanitized data
  end
end
The problem is in the initialize method, which tries to assign the argument d using the private attribute writer data=. However, the setter won't get invoked, because Ruby will treat data = d as a local variable initialization. When Ruby encounters an identifier beginning with a lowercase character or an underscore on the left-hand side of an assignment operator, it creates and initializes a local variable. Note that this behavior is inconsistent with the way Ruby handles the same identifiers in other contexts: if the identifier does not reference a defined local variable, Ruby will try to call a method with the given name.
To make it clear that we want to call the writer method (i.e. the method ending with the = character), we need to prepend the name with self:
def initialize(d)
  self.data = d
end
This may seem counter-intuitive, given the rule that private methods cannot be called with an explicit receiver (even if it's self). However, the rule comes with an exception: private writer methods (i.e., methods ending with =) can be invoked with self.
Another way to fix the bug would be to directly assign the value to the instance variable @data in the initializer:
def initialize(d)
  @data = clean(d)
end
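The local-variable trap can be demonstrated side by side; the Broken and Fixed class names below are our own, and the writers are simplified to skip sanitization:

```ruby
class Broken
  attr_reader :data

  def initialize(d)
    data = d            # parsed as a local variable; @data is never set
  end

  private

  def data=(d)
    @data = d
  end
end

class Fixed
  attr_reader :data

  def initialize(d)
    self.data = d       # explicit self invokes the private writer
  end

  private

  def data=(d)
    @data = d
  end
end

Broken.new(5).data  # => nil
Fixed.new(5).data   # => 5
```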
Seasoned Rubyist, or a seasoned programmer learning Ruby?
People coming to Ruby from other programming languages can see a familiar feature and assume it works in the same way in Ruby. Unfortunately, that’s not always the case, and there are a few concepts in Ruby that can be surprising to developers who have experience with some other language. The following questions aim to separate experienced Rubyists from people who blindly follow familiar concepts and apply them to Ruby.
Q: Explain the difference between throw/catch and raise/rescue in Ruby.
Like most modern object-oriented programming (OOP) languages, Ruby has a mechanism for signaling exceptional situations and handling those exceptions in other blocks of code. To signal an error, a Ruby programmer would use the raise method, which aborts the execution of the current block of code and unwinds the call stack looking for a rescue statement that can handle the raised exception.
In that sense, the raise method performs the same job as the throw statement does in C++ and languages inspired by it, whereas rescue corresponds to the catch block. These operations should be used for signaling an exceptional state and handling it; they should not be used for normal flow control.
However, Ruby also has throw and catch methods, which can put newcomers off balance, because they are used for flow control, and should not be used for error handling.
You can think of throw as a GOTO statement and catch as a method that sets up a label. However, in contrast to traditional GOTO, which can jump to any other point in the program, throw can only be used inside a catch block, as we'll demonstrate momentarily.
It's important to understand the syntactical differences: rescue is a keyword, not a method we could override (raise is a method, though). We can combine multiple rescue statements one after another to handle different types of errors, and we always put the rescue keyword after a block that will, potentially, raise an exception.
For instance:
begin
  response = Net::HTTP.get(uri)
rescue Timeout::Error
  puts "A timeout occurred, please try later."
rescue Net::HTTPBadResponse
  puts "Server returned a malformed response."
rescue
  puts "An unknown error occurred."
end
On the other hand, catch is a method that accepts two arguments: a symbol that can be caught (a "label"), and a block inside which we can use throw. The throw method can also accept two arguments: the first is mandatory, and should be a symbol that matches the catch label to which the program should jump; the second is optional and will become the return value of the catch statement. The throw/catch pair can be useful when we need to break out of a nested loop, like in this example of looking for a free product:
free_product = catch(:found) do
  shops.each do |shop|
    shop.products.each do |product|
      if product.free?
        throw :found, product
      end
    end
  end
end

if free_product
  puts "Found a free product: #{free_product}!"
end
Q: Is there a difference between the Boolean &&/|| operators and the written versions and/or?
There is definitely a big difference between these two. Even though && and and (or || and or) are semantically equal (both being short-circuit Boolean operators), they have different operator precedence.
The English operators and and or have very low precedence. Their precedence is, in fact, lower than that of most other Ruby operators, including the assignment operator =.
Considering that these operators are the Boolean operators in Python, seasoned Pythonistas can misuse them in Ruby, which could produce unexpected logical errors. Consider this idiom that’s very often used to access an attribute of an object that may not be initialized:
actors = [ Actor.new(first_name: 'Mike', last_name: 'Myers') ]
name = actors[0] && actors[0].name # => "Mike"
puts name # => Mike
If the actors array is empty, or contains nil as its first element, the Boolean statement will immediately resolve to this value, and the name variable will be nil. If the first element of the array is truthy, however, we will then send the message name to it, and assign the result to the variable name.
However, if we used the English version of the operator, we’d get a different result:
actors = [ Actor.new(first_name: 'Mike', last_name: 'Myers') ]
name = actors[0] and actors[0].name # => "Mike"
puts name # => #<Actor:0x007fa07a1b4a00>
We can see that the result of the expression in the second line and the value assigned to the variable in the end are different. Because = binds tighter than and, the operands were grouped around the assignment first, and only then around the and operator. We can illustrate this with parentheses:
(name = actors[0]) and actors[0].name
The reason to actually have both operators in the language is to allow the use of and and or as control-flow commands in Perl style.
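The precedence difference can be reproduced without any helper classes (the variables a and b below are our own illustration):

```ruby
a = nil || 1    # || binds tighter than =, so a receives the whole expression
b = nil or 1    # parsed as (b = nil) or 1, so b receives only nil

a  # => 1
b  # => nil
```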
Q: Given below is a snippet of code that calculates the total discounted value among all products that have a discount of 30% or more in all shops. Rewrite this snippet using basic functional operations (map/filter/reduce/flatten/compact), assuming shops is an array of Shop objects loaded in memory, where each shop references its list of products. Which version would you keep in production-ready code?
total_discount = 0
shops.each do |shop|
  shop.products.each do |product|
    if product.discount >= 0.3
      total_discount += product.discount * product.price
    end
  end
end
Since we want to transform a list into a single value, this seems to be a job for the reduce method (or inject, an alias of reduce):
total_discount = shops.reduce(0) { |total, shop|
  total + shop.products.reduce(0) { |subtotal, product|
    if product.discount >= 0.3
      subtotal + product.discount * product.price
    else
      subtotal
    end
  }
}
The whole calculation is now a single expression and there’s no need to initialize the accumulator variable in a separate line.
However, this approach might be somewhat harder to read, as we've got nested reduce operations. To simplify the code, we could first create a list of all products and then reduce it by chaining operations:
total_discount = shops.
  flat_map { |s| s.products }.
  reduce(0) { |total, product|
    if product.discount >= 0.3
      total + product.discount * product.price
    else
      total
    end
  }
It’s possible to further improve readability if we are willing to sacrifice some performance by splitting different kinds of operations into separate blocks:
total_discount = shops.
  flat_map { |s| s.products }.
  select { |p| p.discount >= 0.3 }.
  map { |p| p.discount * p.price }.
  reduce(0) { |sum, d| sum + d }
It is important to repeat that we’ve hindered the performance with this change: the new code will iterate multiple times through the same list of products. If this code is not in a performance-critical path, this decrease could be justified by the improvements in readability.
Algorithmically speaking, both operations are linear in time, so for a very large number of products, the style of the code may not be of crucial importance.
On the other hand, we’ve gained code that declaratively lists steps in the processing of the initial list, similarly to an SQL query.
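One possible middle ground, not mentioned in the original article, is Enumerator::Lazy, which keeps the declarative chain while streaming each element through the whole pipeline instead of materializing intermediate arrays. The Product struct and sample data below are our own stand-ins for the article's Shop objects:

```ruby
Product = Struct.new(:price, :discount)

shops = [
  [Product.new(100, 0.5), Product.new(10, 0.1)],  # products of shop 1
  [Product.new(40, 0.3)]                          # products of shop 2
]

# sum is a regular Enumerable method, so it forces the lazy chain.
total_discount = shops.lazy.
  flat_map { |products| products }.
  select   { |p| p.discount >= 0.3 }.
  sum      { |p| p.discount * p.price }

total_discount  # => 62.0 (100 * 0.5 + 40 * 0.3)
```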
A Ruby guru should know how to reason about the change, measure the performance of both solutions and make a decision relevant to the context. Your goal here should not be to wait for the “correct” answer, but instead to listen to the programmer to see how she or he judges the various options.
Q: What's the difference between public, protected and private visibility in Ruby?
Method visibility specifiers are an area that often trips up newcomers to Ruby who are familiar with almost any other OOP language.
Public visibility is the default in Ruby, and it behaves just like it does in any other language. Public methods can be invoked by any object that can access the method’s object.
However, private visibility is different. In Ruby, private methods can be directly called only if we don't explicitly specify the message receiver.
Consider this implementation of finding the n-th Fibonacci number using recursion with memoization:
class MemoizedFibonacci
  def initialize
    @memo = {}
  end

  def get(n)
    @memo[n] ||= calculate(n)
  end

  private

  def calculate(n)
    return 1 if n <= 1
    get(n-1) + get(n-2)
  end
end
We have a public method get and a private method calculate. The public method first checks if the result was previously calculated by looking it up in the @memo hash. If it wasn't, it calls the calculate method and stores the result in the hash.
Let's try to make a slight modification to the get method:
def get(n)
  @memo[n] ||= self.calculate(n)
end
Newcomers know that self in Ruby is equivalent to this in languages like C++, C# and Java, so they are led to believe this change would have no effect. However, we've now added the explicit receiver, self, to the message calculate. Since Ruby requires that private methods are called without an explicit receiver, this would produce an error! (Ruby 2.7 later relaxed this rule and permits a literal self receiver for private method calls.)
Another unexpected side effect of this rule is that private methods can be called from subclasses. In fact, many things we consider "keywords" in Ruby are nothing but private methods in the module Kernel, which is included by the Object class and therefore inherited by every Ruby object. For example, when we raise an exception using raise, we're actually calling a private method of a superclass!
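This claim about raise can be checked directly:

```ruby
# raise lives in Kernel as a private instance method, inherited everywhere.
Kernel.private_instance_methods.include?(:raise)  # => true
42.respond_to?(:raise)                            # => false (private, hidden)
42.respond_to?(:raise, true)                      # => true  (include private)
```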
Ruby is an open-minded language, so it doesn’t even let developers lock down their privates, so to speak.
This leaves the protected visibility. It behaves like public, but with an exception: protected methods can be called only from methods of the same class or any of its subclasses. You will notice that, in this sense, protected behaves in a similar manner to the private visibility of other languages. This means that protected visibility should be used in classes that want to hide their state but allow calling the method from other methods of the class that need it to produce copies or implement comparisons.
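A typical use of protected for comparisons might look like this (the Account class is our own illustration, not from the article):

```ruby
class Account
  def initialize(balance)
    @balance = balance
  end

  # Needs to read another instance's state, so balance can't be private:
  # protected allows other.balance from within the same class.
  def richer_than?(other)
    balance > other.balance
  end

  protected

  attr_reader :balance
end

a = Account.new(100)
b = Account.new(50)
a.richer_than?(b)  # => true
# a.balance        # NoMethodError: protected method `balance' called
```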
The Big Picture
Q: Name at least three different implementations of Ruby. Discuss the differences among them.
The standard Ruby implementation is called MRI (short for Matz's Ruby Interpreter) or CRuby. As the names suggest, it was written by Yukihiro "Matz" Matsumoto in C. Being the primary implementation of Ruby by its author, it is no surprise that it grows together with the language. Even though there is an ISO specification of the language (ISO/IEC 30170:2012), the spec was already obsolete with the release of Ruby 2.0. Thus, all new language features will first appear in MRI, and then they may get implemented in other interpreters. Being written in C, MRI can interoperate with other C code, and run gems written in C as well.
One of the most commonly cited problems with this implementation is the lack of support for true parallelization, since MRI Ruby supports only green threads and depends on a Global Interpreter Lock (GIL).
Rubinius is another implementation, based on LLVM, and written in C++ and Ruby. It improves concurrency by using native threads and a just-in-time (JIT) compiler. Additionally, the core library is mostly written in Ruby itself, making it easier to understand the internals, especially to folks not very comfortable with C.
The coupling between MRI and the language makes it hard to keep other implementations of Ruby up to date, which means they may lack features found in the recent versions of MRI. One of the first things the Rubinius team made was RubySpec: a runnable specification of the Ruby language, which would allow developers of the language’s forks to check their implementation against the “standard”. Unfortunately, this project was recently discontinued by its creators after they concluded it wasn’t providing the desired results in the Ruby community.
Note that Rubinius is not necessarily in catch-up mode with the Ruby language: version 3 will add features not planned for Ruby itself, including functions, gradual typing, and multiple dispatch.
Those who need to interface Java code with Ruby may find JRuby helpful. Like Rubinius, it offers improved concurrency options by relying on native threads, and JIT compilation of bytecode into machine code. As a bonus, it provides interoperability with existing Java Virtual Machine (JVM) code (you can use native Java classes and libraries) and the possibility to run Ruby on any JVM.
Since JRuby relies upon the JVM, it can't use Ruby gems written in pure C. While this implementation improves runtime speeds as a rule, it introduces slow application start-up times.
Another noteworthy implementation is mruby: an embeddable subset of the Ruby ISO standard. Matz himself is leading its development, with the goal of enabling Ruby to run as an embedded language inside existing apps and games, providing scriptability and automation, and thus challenging Lua.
You can read much more about other Ruby implementations in our blog post about Ruby interpreters and runtimes.
Wrap-Up
A perfect Ruby developer isn’t one that will blindly optimize the code so it runs a few milliseconds faster or consumes a few megabytes less memory than it used to. The perfect candidate will know how to achieve that when performance is of essence, but will be able to recommend alternatives if it would sacrifice too much readability and maintainability. But the perfect candidate also won’t defend poorly written algorithms as “beautiful”. In Ruby, one could say, perfection is in compromises and good judgment calls. Remember that when talking to your candidates, and listen to their reasoning.
Happy interviewing!
My baby is 12!
By user12625760 on Nov 05, 2004
It was Guy Fawkes Day, 12 years ago, that I created what was to become the oldest NIS+ name space in the world, unless of course you know otherwise.
Its time of birth is immortalised in the creation time of the directory objects:
# niscat -o org_dir
Object Name   : "org_dir"
Directory     : "hotline.uk.sun.com."
Owner         : "podtwo.hotline.uk.sun.com."
Group         : "admins.hotline.uk.sun.com."
Access Rights : r---rmcdrmc-r---
Time to Live  : 12:0:0
Creation Time : Thu Nov 5 13:33:46 1992
Mod. Time     : Thu Mar 22 17:58:10 2001
Object Type   : DIRECTORY
Name          : 'org_dir.hotline.uk.sun.com.'
Type          : NIS
Master Server :
It was spanked into life on Solaris 2.1 and was in prime time use by 2.3.
I'm told the name space is still in daily use, though is largely restricted to the lab. Gone are the days when the whole UK hotline were using it, but it struggles on. I wonder if it will make it to be a teenager. I suspect Paul will kill it, but I won't hold that against him for long.
It is nice to see it still running, and working despite what the naysayers said at the time, especially as it just started as somewhere for me to test things on my workstation.
While I still have a machine that can host a zone under my control, we will keep this puppy running. For the cost of an IP address and a little memory we should hold onto this bit of history as you would a piece of art or an antique. It is likely that in the future the hotline NIS+ namespace will merge with our own, but by then we will have this safe for its next birthday.
Kinda scary, though: it is heading for its 13th birthday when we are intending to do this. Better be careful...
Posted by Paul Humphreys on November 05, 2004 at 04:43 AM GMT #
In other words, Single expects there to be one, only one, and always one item in the collection, and throws when that doesn't hold. FirstOrDefault shouldn't throw any exceptions: FirstOrDefault is the safest, while Single is the most dangerous.

The "Cannot implicitly convert type 'int?' to 'int'" error comes up because a LINQ query over a nullable column yields an int?, not an int. Have a try like this:

return (from q in entities1.Questions
        join qa in entities1.Questions_Assessments on q.QuestionID equals qa.QuestionID
        where qa.AssessmentID != null && qa.AssessmentID == assessmentid
        select q.SequenceOrder).Max();

As AssessmentID is a nullable type and you are comparing it with a not-null variable, the null check is needed before the equality test.

To turn an int? into an int, either read .Value (after checking HasValue) or supply a fallback with the null-coalescing operator, as in PriorityDesc = priorityDescription[t.priority ?? 0]. The same pattern appears when projecting nullable columns into an anonymous type, e.g. email_address = emp.email_address == null ? null : emp.email_address, member_status_id = emp.member_status_id.Value. For non-nullable types, default values are predefined.

Also watch the difference between assignment and comparison: it should be h.Kod == ..., not h.Kod = ....

Note as well that an int? is not the same size as an int, so an int? array cannot be converted to an int array, not even using unsafe code.

Finally, a style point: parameters and variables should be in camelCase, not PascalCase. So User and OrdersPerHour would be user and ordersPerHour.
|
http://hiflytech.com/int-to/cannot-convert-int-to-int-linq.html
|
CC-MAIN-2017-39
|
refinedweb
| 933
| 69.18
|
A Backbone.js Require.js test driven workflow.
A Backbone.js and Require.js Test Driven Workflow inspired by Greg Franko's Backbone-Require-Boilerplate-Lite. Together we promote decoupling your JavaScript into modules, separating business logic from application logic using Collections/Models and Views, including non-AMD Compatible Third Party Scripts in your project, optimizing all of your JavaScript (minify, concatenate, etc), and unit testing your JavaScript all while minimizing the time it takes to perform monotonous tasks.
Website: cl0udc0ntr0l.github.io/generator-stacked
NPM: npmjs.org/package/generator-stacked
Repository: github.com/cl0udc0ntr0l/generator-stacked
Bugs: github.com/cl0udc0ntr0l/generator-stacked/issues
The yo man now asks you if you want to use MongoDB and Mongoose in your app. He does all the configuration for you and even sets up a sample schema. You just have to point the config file at your database and start coding!
You will have to install MongoDB if you don't have it installed already. This is how you do it.
Once you are done with that you have to point your config file
server/config/config.jsat your mongoDB install.
```javascript
exports.config = {
    listenPort: "1337",
    sessionSecret: "keyboard-cat", // You should change this while you are here
    database: {
        IP: "10.0.0.100",  // Put your mongoDB IP here (no http/https)
        name: "defaultDB", // Choose a name (the db will be created automatically)
        port: "27017"      // 27017 is the default port. Only change this if you specified a different port for your mongoDB install.
    }
};
```
That's it! You can now create schemas and persist your data. A schema generator is coming soon.
When you create a new app, Stacked will create an event aggregator called Notifier that you can use to send messages across your entire app. It follows the pub/sub pattern, meaning you can have multiple listeners for each event.
Include Notifier with Require.js:

```javascript
define(["jquery", "backbone", "events/Notifier"], function ($, Backbone, Notifier) {
    // ...
});
```
Send the message
```javascript
Notifier.trigger('myChannel.myMessage', optionalParams);
```
Recieve the message somewhere else
```javascript
Notifier.on('myChannel.myMessage', callback);
```
Or even better! In your initialize method:
```javascript
Notifier.on('myChannel.myMessage', this.myFunction, this);
```
and then add your own method:

```javascript
myFunction: function () {
    // your logic here
    return this; // Don't forget to chain your methods!
}
```
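Under the hood, an event aggregator like Notifier is just an object implementing Backbone's event mixin. Here is a minimal stand-alone sketch of the same pub/sub idea, so you can see why multiple listeners and chaining work; this is illustrative, not Stacked's actual Notifier:

```javascript
// Minimal event aggregator sketch: multiple listeners per channel,
// fired in registration order, with an optional `this` context,
// and chainable on()/trigger() calls.
function createNotifier() {
    var channels = {}; // channel name -> array of { callback, context }
    return {
        on: function (channel, callback, context) {
            (channels[channel] = channels[channel] || [])
                .push({ callback: callback, context: context });
            return this; // allow chaining
        },
        trigger: function (channel) {
            var args = Array.prototype.slice.call(arguments, 1);
            (channels[channel] || []).forEach(function (listener) {
                listener.callback.apply(listener.context, args);
            });
            return this;
        }
    };
}
```

Backbone.Events adds more on top of this (off(), once(), listenTo()), but the dispatch loop is the core of the pattern.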
The Stacked workflow is comprised of 8 tools for improving your productivity and satisfaction when building a web app: yo (the scaffolding tool), grunt (the build tool), npm (for server side package management), mocha (for server side unit testing), bower (for client side package management), jasmine (for client side unit testing), backbone.js (for decoupling of data logic from application logic) and require.js (for making our code modular and maintainable).
Server Side
Client Side
npm install -g yo (yes, it downloads them all)
npm install -g generator-stacked
Generate your app.
```bash
mkdir myApp && cd $_
yo stacked
```
Stacked will ask you some questions to help you set up your app.
Your name is injected into package.json and bower.json as the author of the app.
Your github username is used to create repository paths in package.json and bower.json
This is the name of your app. It is used to complete repository paths and populate other files.
In the future I may use this variable to namespace the app.
I chose the name "MVC set" to represent a Backbone.js Model, Collection, View and Template. The generator automatically includes all your files with Require.js, so you just initialize and write!
When you first create your app, the MVC option will initialize a Backbone.js Router and a Require.js init file for you. It works a little differently in the MVC subgenerator, which we will get to in a moment.
The path selector allows you to nest your Models, Views, Collections and Templates. This allows for a more organized environment. The path root is displayed for you:
root -> public/js/app/[type]/ [type] being Model, View, Collection or Template. You can just continue after the forward slash
users/admin or
user\admin\ The trailing slash is optional. Again... Require.js includes stay intact.
Yeoman automatically checks for file collisions and will prompt you for action. You are not in danger :)
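The slash handling described above (either separator, optional trailing slash) can be done with a small normalization step. A sketch of the idea; this is illustrative, not Stacked's actual code:

```javascript
// Normalize a user-supplied nesting path: accept / or \ as separators
// and strip any trailing slash, so "user\admin\" becomes "user/admin".
function normalizePath(input) {
    return input
        .replace(/\\/g, "/")   // backslashes -> forward slashes
        .replace(/\/+$/, "");  // drop trailing slash(es)
}
```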
If you want to use LESS, which you should (default is yes), you can just hit enter and Stacked will automatically include the dependencies in package.json and create grunt tasks for compiling. If you decide you would rather stick with CSS or use SASS, Stylus etc... Stacked will strip those includes from all files so you are only using what you need.
Backbone-Require-Boilerplate-Lite ships with Jasmine on the client, but I also wanted to have the option to use a server side testing framework. I went with mocha because I feel it's the most flexible. It's very easy to use after you get it all set up. Luckily Stacked takes care of that for you.
After you make your selections, your app will be generated and all dependencies will be installed. The console will log out how to initialize your build.
To initialize your build, type:

grunt init

A few things are happening here which are good to understand and leverage.
BOOTSTRAP
The first thing the init task does is copy your bootstrap.css file from the bower install directory
public/js/libs/bootstrap/dist/css to
public/css
Leverage! If you want to edit your Bootstrap styles you could edit
public/css/bootstrap.css, or you can
cd public/js/libs/bootstrap/less, edit the less files and rebuild Bootstrap, then
cd your/apps/root and re-initialize your build to pull the new CSS into your
public/css directory.
AVAILABLE BOOTSTRAP GRUNT TASKS
grunt test Run jshint and qunit
grunt dist-js Compile Bootstrap js
grunt dist-css Compile Less
grunt dist Compile Full Distribution
Be careful with updating Bootstrap with bower if you edit the less files. They will be overwritten with the new Bootstrap install.
Next we pull in Font-Awesome CSS from bower. Any time a new version is released you can
bower update font-awesome from your app root to update the packages and then run
grunt init to re-initialize your app.
If you chose the less option, your
public/css/includes/less/custom.less file will be compiled to
public/css/includes/css/custom.css
Finally, all your JS is run through jshint and r.js. You can switch the production variable in
public/index.html to
true to use the minified build.
Back to our apps root directory now.
grunt build will lint your JavaScript and run r.js to compile your production build.
grunt test runs jshint and mocha tests
grunt server Starts up your express server and uses nodemon to listen for changes
Subgenerators create the components of your app that you use the most, Models, Collections, Views and Templates. There are three subgenerators that cover all bases.
yo stacked:model
The Model generator creates a Model and optionally, a collection
yo stacked:view
The view generator creates a View and optionally, a Template. There is no template generator because it really wouldn't save any time.
yo stacked:mvc
The mvc generator creates a full set of backbone components, minus the Router. You will have to include your new set in your existing Router manually (for now).
Stacked App structures are based on Backbone-Require-Boilerplate by Greg Franko, Nick Pack and Brett Jones.
I have made some enhancements to the server side.
server/API.js file for REST calls.
server/config/config.js
Below is the documentation for Backbone-Require-Boilerplate.
Uses a large portion of the HTML5 Boilerplate HTML and CSS. As you continue down the page to the first
<script> tag, you will notice there is a
production local JavaScript variable that is used to communicate to your application whether you would like to load production or development CSS and JavaScript files.
The
loadFiles() method is then used to load all of the correct CSS and JavaScript files. Below is what gets included:
Production Mode
In production mode, your app's single minified and concatenated JavaScript file is loaded using Almond.js instead of Require.js. Your application's minified common CSS file is also included.
Development Mode
In development mode, your app's non-minified JavaScript files are loaded using Require.js instead of Almond.js. Your application's non-minified common CSS file is also included.
Loader Methods
You will notice that the CSS files and the Require.js file are being included on the page via the
loadFiles() method (which uses the
loadCss() and
loadJS() methods internally). Require.js does not officially support loading CSS files, which is why I included the
loadCSS() method to asynchronously include CSS files. Loading CSS asynchronously also allows the flexibilty/mechanism to load different CSS files if a user is on a mobile/desktop device.
Feel free to use the loadCSS() and loadJS() methods to load any other dependencies your application may have that you do not want to use Require.js for.
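For a feel of what an asynchronous CSS loader like loadCSS() does, here is a rough sketch. The function name matches the text, but the body is illustrative, not the boilerplate's exact implementation; the document object is passed in here purely so the sketch is easy to exercise outside a browser:

```javascript
// Illustrative sketch of an async CSS loader: for each stylesheet URL,
// create a <link rel="stylesheet"> element and append it to <head>.
function loadCSS(doc, urls) {
    urls.forEach(function (url) {
        var link = doc.createElement("link");
        link.rel = "stylesheet";
        link.href = url;
        doc.head.appendChild(link);
    });
}
```

Because the links are appended rather than written inline, the page can decide at runtime (mobile vs. desktop, production vs. development) which files to request.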
This file includes your mobile Require.js configurations.
If we look at the Require.js configurations, we will see the first thing being configured are the paths. Setting paths allow you to define an alias name and file path for any file that you like.
Typically, you want to set a path for any file that will be listed as a dependency in more than one module (e.g. jQuery, Backbone). This saves you some typing, since you just have to list the alias name, and not the entire file path, when listing dependencies. After all of the file paths are set, you will find the Shim configuration (Added in Require.js 2.0).
The Shim configuration allows you to easily include non-AMD compatible JavaScript files with Require.js (a separate library such as Use.js was previously needed for this). This is very important, because Backbone versions > 0.5.3 no longer support AMD (meaning you will get an error if you try to use both Require.js and the latest version of Backbone). This configuration is a much better solution than manually editing non-AMD compatible JavaScript files to make sure the code is wrapped in a
define method. Require.js creator James Burke previously maintained AMD compatible forks of both Backbone.js and Underscore.js because of this exact reason.
```javascript
shim: {
    // Backbone
    "backbone": {
        // Depends on underscore/lodash and jQuery
        "deps": ["underscore", "jquery"],
        // Exports the global window.Backbone object
        "exports": "Backbone"
    }
}
```
The Shim configuration also takes the place for the old Require.js
order plugin. Within the Shim configuration, you can list files and their dependency tree. An example is jQuery plugins being dependent on jQuery:
```javascript
shim: {
    // Twitter Bootstrap plugins depend on jQuery
    "bootstrap": ["jquery"]
}
```
You do not need a shim configuration for jQuery or lodash because they are both AMD compatible.
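Putting the pieces together, a paths-plus-shim configuration looks roughly like this; the file paths here are illustrative, not the boilerplate's exact ones:

```javascript
require.config({
    // Alias names for files listed as dependencies in more than one module
    paths: {
        "jquery": "libs/jquery",        // illustrative paths
        "underscore": "libs/lodash",
        "backbone": "libs/backbone"
    },
    // Shim non-AMD libraries so they can be required like modules
    shim: {
        "backbone": {
            "deps": ["underscore", "jquery"],
            "exports": "Backbone"
        }
    }
});
```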
After Require.js is configured, you will notice the
require method is called. The
require method is asynchronously including all of the files/dependencies passed into the first parameter (jQuery, Backbone, Lodash, Router, etc) into the page.
Finally, a new router instance is instantiated to allow you to use Backbone's routing mechanism (keep reading below for more clarification).
You don't need to instantiate a new router instance if you aren't using a Backbone Router class.
This file starts with a define method that lists jquery, backbone, and View.js as dependencies.
It is best practice to list out all of your dependencies for every file, regardless of whether or not they expose global objects and are already included in the page. This is also especially important for the Require.js optimizer (which needs to determine which files depend on which other files).
If your dependencies do not expose global objects, then it is absolutely mandatory to list it as a dependency, since Require.js does not allow global variables (meaning your modules are private and cannot be accessed by other modules or code without explicitly listing them as dependencies).
The rest of the file is a pretty standard Backbone.js Router class:
There is currently only one route listed (which gets called if there is no hash tag on the url), but feel free to create more for your application.
You must keep the
Backbone.history.start() method call, since this is what triggers Backbone to start reacting to hashchange events.
When your default route is invoked, a new View instance is created, which calls the render method immediately to append the header template to the page.
View.js starts with a define method that lists all of its dependencies.
The rest of the file is a pretty standard Backbone.js View class:
Backbone.js View's have a one-to-one relationship with DOM elements, and a View's DOM element is listed in the
el property. After the
el property is set, the View's model attribute is set to a new instance of the Model returned by Model.js (which was listed at the top as a dependency). Next, the View's
render method is called within the View's constructor, aka
initialize() method, and the View's
template property is set and appended to the page using the Underscore.js
template method ported to Lodash.
If you have read all of the documentation up until this point, you will most likely have already noticed that lodash is being used instead of Underscore.js. Apart from having a bit better cross-browser performance and stability than Underscore.js, lodash also provides a custom build process. Although I have provided a version of lodash that has all of the Underscore.js methods you would expect, you can download a custom build and swap that in. Also, it doesn't hurt that Lodash creator, John-David Dalton, is an absolute performance and API consistency maniac =)
Next, you will find an
events object. Here is where all of your View DOM event handlers associated with the HTML element referenced by your View's
el property should be stored. Keep in mind that Backbone is using the jQuery
delegate method, so it expects a selector that is within your View's
el property. I did not include any events by default, so you will have to fill those in yourself. Below is an example of having an events object with one event handler that calls a View's
someMethod() method when an element with a class name of someElement is clicked.
```javascript
// View Event Handlers
events: {
    "click .someElement": "someMethod"
}
```
I am also declaring a
render method within the View. Backbone expects you to override the
render method with your own functionality, so that is what I did. All my
render method does is append the View's template to the page.
You do not need to use Underscore.js templates. In fact, you don't need to use templates at all. I just included them so you would understand how to use them.
Finally, I am returning the View class.
This file includes a template that is included via the Require.js text plugin. Templates are typically a useful way for you to update your View (the DOM) if a Model attribute changes. They are also useful when you have a lot of HTML and JavaScript that you need to fit together, and instead of concatenating HTML strings inside of your JavaScript, templates provide a cleaner solution. Look at Underscore's documentation to read more about the syntax of Underscore.js templates.
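To get a feel for what a template function does, here is a deliberately tiny interpolation sketch. Real Underscore/Lodash templates also support evaluation and escaping; microTemplate is a made-up name for illustration only:

```javascript
// Tiny stand-in for _.template: compiles a source string into a function
// that replaces each <%= name %> placeholder with the matching data value.
function microTemplate(source) {
    return function (data) {
        return source.replace(/<%=\s*(\w+)\s*%>/g, function (match, key) {
            return data[key];
        });
    };
}
```

This is why templates beat concatenating HTML strings inside your JavaScript: the markup lives in one readable string, and the data is merged in at render time.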
Model.js starts with a define method that lists jquery and backbone as dependencies.
The rest of the file is a pretty standard Backbone.js Model class.
Like other Backbone.js classes, there is an
initialize() method that acts as the Model's constructor function. There is also a defaults object that allows you to set default Model properties if you wish.
Finally, the Backbone.js
validate method is provided for you. This method is called any time an attribute of the model is set. Keep in mind that all model attributes will be validated (once set), even if a different model attribute is being set/validated. This does not make much sense to me, so if you prefer only the Model attributes that are currently being saved/set to be validated, then use the validateAll option provided by Backbone.validateAll.
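The validate contract itself is simple: return an error value when the attributes are bad, and return nothing when they are fine. A plain-function sketch of the pattern; the field names here are made up for illustration:

```javascript
// Sketch of the Backbone validate contract: return an error string for
// invalid attributes; returning undefined (nothing) means "valid".
function validate(attrs) {
    if (!attrs.name || attrs.name.length === 0) {
        return "name is required";
    }
    if (attrs.age !== undefined && attrs.age < 0) {
        return "age cannot be negative";
    }
    // fall through: attributes are valid, return nothing
}
```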
Finally, a new Model class is returned.
Collection.js starts with a define method that lists jquery, backbone, and UserModel.js as dependencies.
The rest of the file is a pretty standard Backbone.js Collection class that is used to store all of your Backbone Models. The Collection model property is set to indicate that all Models that will be within this Collection class will be of type Model (the dependency that is passed into the file).
Finally, a new Collection class is returned.
This file is ready made for you to have your entire project optimized using Grunt.js, the Require.js Optimizer and almond.js.
Grunt.js is a JavaScript command line task runner that allows you to easily automate common development tasks such as code linting, minification, and unit testing.
Running the Jasmine Tasks with Grunt has not been implemented yet.
Almond.js a lightweight AMD shim library created by James Burke, the creator of Require.js. Almond is meant for small to medium sized projects that use one concatenated/minified JavaScript file. If you don't need some of the advanced features that Require.js provides (lazy loading, etc) then Almond.js is great for performance.
Backbone-Require-Boilerplate sets you up to use Require.js in development and Almond.js in production. By default, Backbone-Require-Boilerplate is in development mode, so if you want to try out the production build, read the production instructions below.
Production Build Instructions
Navigate to the root directory of the Backbone-Require-Boilerplate folder and type grunt and wait a few seconds for the build to complete.
Note: If you are on a Windows machine, you will have to type
grunt.cmd
Once the script has finished, you will see that both DesktopInit.min.js and MobileInit.min.js, and the mobile.min.css and desktop.min.css files will be created/updated.
Next, update the
production local variable inside of index.html to be true.
And that's it! If you have any questions, just create an issue on GitHub.
This file is the starting point to your Jasmine test suite and outputs the results of your Jasmine tests. It includes Require.js and points it to testInit.js for all of the proper configurations.
This file includes all of the Require.js configurations for your Jasmine unit tests. This file will look very similar to the Init.js file, but will also include Jasmine and the jasmine-jquery plugin as dependencies.
You will also notice a specs array that will allow you to add as many specs files as your application needs (Specs folders are where your unit tests are). The boilerplate only includes one specs js file by default, so only one specs item is added to the array. Finally, once the specs file is included by the
require() call, Jasmine is initialized.
This file contains all of your Jasmine unit tests. Only seven tests are provided, covering Views, Models, Collections, and Routers (Mobile and Desktop). I'd write more, but why spoil your fun? Read through the tests and use them as examples to write your own.
The entire file is wrapped in an AMD define method, with all external module (file) dependencies listed. The Jasmine tests should be self explanatory (BDD tests are supposed to describe an app's functionality and make sense to non-techy folk as well), but if you have any questions, just file an issue and I'll respond as quickly as I can.
If you want to see Stacked and Backbone-Require-Boilerplate in action, you can head over to the project's site to watch my screencast showing the power of Yeoman, and Greg's screencast demonstrating Backbone-Require-Boilerplate. I have also included quick links to the documentation for all the libraries included in Stacked.
0.1.5 - Sept 5, 2013
0.1.4 - Sept 1, 2013
0.1.3 - Aug 15, 2013
0.1.2 - Aug 14, 2013
0.1.1 - Aug 12, 2013
0.1.0 - Aug 11, 2013
How Can I Use Laravel Envoy or Deployer with SemaphoreCI?
This article was peer reviewed by Wern Ancheta and Viraj Khatavkar. Thanks to all of SitePoint's peer reviewers for making SitePoint content the best it can be!
We will be using SemaphoreCI for continuous delivery and Deployer to push our code to the DigitalOcean production server. If you’re not familiar with Deployer, we recommend you check out this introduction.
Demo Application
We’ll be using a 500px application that loads photos from the marketplace. It was built using Laravel and you can read the full article about its building process here, and find the repo on GitHub.
Creating a Deployer Script
The way Deployer works is by us defining servers, and then creating tasks that handle the process of deploying the application to those servers. Our
deploy.php script looks like this:
```php
<?php
require_once "recipe/common.php";

set('ssh_type', 'native');
set('default_stage', 'staging');
env('deploy_path', '/var/www');
env('composer_options', 'install --no-dev --prefer-dist --optimize-autoloader --no-progress --no-interaction');

server('digitalocean', '174.138.78.215')
    ->identityFile()
    ->user('root')
    ->stage('staging');

task('deploy:upload', function() {
    $files = get('copy_dirs');
    $releasePath = env('release_path');

    foreach ($files as $file) {
        upload($file, "{$releasePath}/{$file}");
    }
});

task('deploy:staging', [
    'deploy:prepare',
    'deploy:release',
    'deploy:upload',
    'deploy:shared',
    'deploy:writable',
    'deploy:symlink',
    'deploy:vendors',
    'current', // print current release number
])->desc('Deploy application to staging.');

after('deploy:staging', 'success');
```
You should read the Deployer article if you’d like to learn more about what this specific script does. Our next step is to set up a SemaphoreCI project. Please read the crash course article if you’ve never tried SemaphoreCI before, and do that.
Setting up Deployment
To configure the deployment strategy, we need to go to the project’s page and click
Set Up Deployment.
Next, we select the generic deployment option, so that SemaphoreCI gives us the freedom to add manual configuration.
After selecting automatic deployment, SemaphoreCI will give us the ability to specify deployment commands. The difference between manual and automatic is that automatic deployment is triggered after every successful test, while manual will let us deploy any successful commit.
We can choose to include the
deployer.phar in our repo as a PHAR file or require it using Composer. Either way, the commands will be similar.
If we chose to deploy the application using SSH, SemaphoreCI gives us the ability to store our SSH private key on their servers and make it available in the deployment phase.
Note: SemaphoreCI recommends that we create a new SSH key specifically for the deployment process. In case someone stole our keys or something, we can easily revoke it. The key will also be encrypted before storing it on their end.
The key will be available under
~/.ssh/id_rsa, so the
identityFile() can be left at the default.
Push to Deploy
Now that everything is set up, we need to commit some changes to the repository to trigger the integration and deployment process.
```bash
# Edit something
git add .
git commit -am "Updated deploy"
git push origin master
```
If something went wrong, we can click on the failed deploy process and see the logs to investigate the problem further.
The above screenshot is a failed commit due to the
php artisan clear-compiled command returning an error because the
mcrypt extension wasn’t enabled.
Note: Another neat trick that SemaphoreCI provides is SSHing to the build server to see what went wrong.
Other Deployment Tools
The same process we used here may be applied to any other deployment tool. Laravel Envoy, for example, might be configured like this:
```blade
@servers(['web' => 'root@ip-address'])

@task('deploy', ['on' => 'web'])
    cd /var/www

    @if ($new) {{-- If this is the first deployment --}}
        git init
        git remote add origin repo@github.git
    @endif

    git reset --hard
    git pull origin master
    composer update
    composer dumpautoload -o

    @if ($new)
        chmod -R 755 storage
        php artisan storage:link
        php artisan key:generate
    @endif

    php artisan migrate --force
    php artisan config:clear
    php artisan route:clear
    php artisan optimize
    php artisan config:cache
    php artisan route:cache
    php artisan view:clear
@endtask
```
And in the deployment command step, we would install and run Envoy:
```bash
cd /var/www
composer global require "laravel/envoy=~1.0"
envoy run deploy
```
That’s it! Envoy will now authenticate with the key we’ve added and run the update command we specified.
Conclusion
CI/CD tools are a great improvement to a developer’s workflow, and certainly help teams integrate new code into production systems. SemaphoreCI is a great choice that I recommend for its easy to use interface and its wonderful support. If you have any comments or questions, please post them below!
Now that we're done adding the NuGet package, it's time to write the classes we have decided on. Following is the code for the Person class.
```csharp
public class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
```
Now we are going to create an Employee class which inherits from the Person class and has some properties of its own.
```csharp
public class Employee : Person
{
    public string Designation { get; set; }
}
```
In the same way, following is the code for the Customer class.
```csharp
public class Customer : Person
{
    public string PhoneNumber { get; set; }
}
```
Now it’s time to add a database.
Now I’ve added connection string in my app config file like following.
```xml
<connectionStrings>
  <add connectionString="Data Source=(LocalDB)\v11.0;AttachDbFilename=C:\data\Blog\Samples\EntityInheritance\EntityInheritance\EntityInheritance.mdf;Integrated Security=True"
       name="MyConnectionString"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```
Now it’s time to write a code for database context file. Following is a code for that.
```csharp
public class EDataContext : DbContext
{
    public EDataContext() : base("MyConnectionString")
    {
    }

    public IDbSet<Customer> Customers { get; set; }
    public IDbSet<Employee> Employees { get; set; }
}
```
Here you can see it’s very simple. There are two dbset properties for Employee and Customer. Now in my main I have written following code to list employee.
```csharp
static void Main(string[] args)
{
    using (var context = new EDataContext())
    {
        var employees = context.Employees.ToList();
        foreach (Employee employee in employees)
        {
            Console.WriteLine(employee.Id);
        }
    }
    Console.ReadLine();
}
```
Now let’s run this application. Right now it will do nothing as we don’t have any data inserted in it. But let’s take a look of database and see what happened there.
Here you can see that it has created two tables, one for each entity. This is called "Table per Type" inheritance, where a separate table is created for each type, Customer and Employee. In a future post we will see "Table per Hierarchy" inheritance, where we will have a single table for both Customers and Employees.
Hope you like it. Stay tuned for more!
Your feedback is very important to me. Please provide your feedback by leaving comments.
Eclipse Community Forums: HelloWorldSWT tutorial

Jason Clark (2012-09-03):

Yes, I'm new to Java and Eclipse (an incredible tool!). I walked through the tutorial to build HelloWorld! and it worked. Then I tried walking through the tutorial for HelloWorldSWT. When I had problems, I deleted the HelloWorldSWT project and started over, and I let the tutorial "do it for me" as much as possible. But it still doesn't work. I've tried Organize Imports and can see no change. Where would I look to see the changes from this command? Here's the code for HelloWorldSWT.java:

```java
public class HelloWorldSWT {
    /**
     * @param args
     */
    public static void main(String[] args) {
        Display display = new Display();
        Shell shell = new Shell(display);
        shell.setText("Hello world!");
        shell.open();
        while (!shell.isDisposed()) {
            if (!display.readAndDispatch())
                display.sleep();
        }
        display.dispose();
    }
}
```

Here are the errors after I compile:

```
Exception in thread "main" java.lang.Error: Unresolved compilation problems:
    Display cannot be resolved to a type
    Display cannot be resolved to a type
    Shell cannot be resolved to a type
    Shell cannot be resolved to a type
    at HelloWorldSWT.main(HelloWorldSWT.java:9)
```

The tutorial says: "You will get compile errors. Right click in the Java editor and select Source > Organize Imports, then save your changes." I've done this. Many times. I've tried Ctrl+Shift+O and I still get the compile errors. Why does this not work? I really appreciate any help you can give me. Thank you!

Glen Zanabe (2012-09-05):

The same thing happened to me; this is how I fixed it. Go back to the "import SWT" part of the tutorial and make sure you import org.eclipse.swt.{platform}.{os}.{arch} and not just org.eclipse.swt. Mine would be org.eclipse.swt.win32.win32.x86. Then continue on with the rest of the tutorial. When you click Source > Organize Imports in the Java editor, these two lines of code should appear at the top automatically and the compile errors should be resolved:

```java
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Shell;
```

From there just run your application. Hope this helps!

luis villegas (2012-12-15):

I am running a 64-bit operating system (Windows 7), but I had to add org.eclipse.swt.win32.32x86 to make my SWT application work. If I had not, the two import lines at the top of my project would not appear when organizing imports. Do you have an idea of why that is?

Andrew Geriane (2013-02-21):

Glen, just find a 32-bit version of Eclipse. It doesn't matter if it is Indigo, or the Mac version if you are using a Mac. This is what I have also. The only missing code is:

```java
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Shell;
```

This is the reason we are getting the error: instances of this class are responsible for managing the connection between SWT.

Aditya Iyer (2013-04-23):

Hi, I can't seem to find org.eclipse.swt.{platform}.{os}.{arch}; I can only find org.eclipse.swt 3.0.8. Please help.

Baruch Youssin (2013-04-23):

I think I have been through this. Have a look. Good luck!

Mario Ramirez Velasquez (2015-02-25):

Thanks, works for me.

Mario Ramirez Velasquez (2015-03-05):

Thanks, works for me.
|
http://www.eclipse.org/forums/feed.php?mode=m&th=372232&basic=1
|
CC-MAIN-2015-18
|
refinedweb
| 630
| 70.29
|
Bowing to Visual Studio's request, I started my latest project using Entity Framework Core (1.0.1).
So I wrote my database models as I always have, using the 'virtual' modifier to enable lazy loading for a List. But when loading the parent table, it appears that the child list never loads.
Parent Model
public class Events
{
    [Key]
    public int EventID { get; set; }

    public string EventName { get; set; }

    public virtual List<EventInclusions> EventInclusions { get; set; }
}
Child Model
public class EventInclusions
{
    [Key]
    public int EventIncSubID { get; set; }

    public string InclusionName { get; set; }

    public string InclusionDesc { get; set; }

    public Boolean InclusionActive { get; set; }
}
Adding new records to these tables works as I am used to: I can nest the EventInclusions records as a List inside the Events record.
Though when I query this table
_context.Events.Where(e => e.EventName == "Test")
The Issue
EventInclusions will return a null value regardless of the data behind the scenes.
After reading a bit, I am getting the feeling this is a change between EF6, which I normally use, and EF Core.

I could use some help in either making a blanket "lazy loading on" statement or figuring out the new format for specifying lazy loading.
Caz
So it appears that EF Core does not currently support lazy loading. It's coming, but may be a while off.
If anyone else comes across this problem and is struggling: below is a demo of eager loading, which is what you have to use for now.
Say before you had a person object and that object contained a List of Hats in another table.
Rather than writing
var person = _context.Person.Where(p => p.id == id).FirstOrDefault();
person.Hats.Where(h => h.id == hat).ToList();
You need to write
var person = _context.Person.Include(p => p.Hats).Where(p => p.id == id).FirstOrDefault();
And then
person.Hats.Where(h=> h.id == hat).ToList(); will work
If you have multiple Lists - Chain the Includes
var person = _context.Person.Include(p => p.Hats)
                            .Include(p => p.Tickets)
                            .Include(p => p.Smiles)
                            .Where(p => p.id == id).FirstOrDefault();
I kinda get why this method is safer: you're not loading huge data sets that could slow things down. But I hope they bring lazy loading back soon!!!
Caz
Lazy loading is now available as of EF Core 2.1, and here is a link to the relevant docs:
|
https://entityframeworkcore.com/knowledge-base/40122162/entity-framework-core---lazy-loading
|
CC-MAIN-2022-40
|
refinedweb
| 400
| 70.33
|
In response to the many requests from VB (and other-language) coders, I finally took some time to update the project and move all the functions into a DLL, so that any program developed in a language capable of calling standard Windows libraries (DLLs) can use them. To demonstrate these functions, I also included C and VB projects.
I manage a system where I need to restrict users from accessing the desktop and running other applications. The search for ways to achieve this returned several different techniques. Although in the end I didn't use any of the techniques described here, I decided to compile all the code into one application for everyone who might need it.
Note: I don't claim to be the author of any of the code presented in this article. The application is a compilation of several sources and I will try to acknowledge the authors whenever possible.
Hiding the Windows Desktop, Taskbar, Start Button,..., is generally achieved by passing the window handle returned by FindWindow() to ShowWindow().
If, for example, you want to hide the Taskbar, you can use the following code:
ShowWindow(FindWindow("Shell_TrayWnd", NULL), SW_HIDE);
If you want to hide the Start Button, you have to know the button control ID first and use the same technique:
ShowWindow(GetDlgItem(FindWindow("Shell_TrayWnd", NULL), 0x130), SW_HIDE);
How do I know that the Taskbar class name is "Shell_TrayWnd" or that the Start Button id is 0x130? I used the Spy++ utility that comes with Microsoft Visual C++ 6.0. You can use this technique for any window or control you wish to hide.
If you want just to disable the window, and not hide it, change the ShowWindow() call to EnableWindow().
You will see that if you hide the Desktop and the Taskbar, the Start Menu still pops up when you press the Win key or double click the desktop area. To find out how to prevent this unwanted behavior, you have to read the next section.
I call "system keys" all the special key combinations that the operating system (OS) uses to switch between tasks or bring up the Task Manager.
There are several ways to disable these key combinations.
You can disable all these key combinations (including Ctrl+Alt+Del) by fooling the operating system into thinking the screen saver is running. This can be accomplished with the following code:
SystemParametersInfo(SPI_SETSCREENSAVERRUNNING, TRUE, &bOldState, 0);
This trick doesn't work in Windows NT or higher (Win NT+), so you need other techniques.
In Win NT+, one way to trap the task-switching key combinations is to write a keyboard hook. You install a keyboard hook by calling SetWindowsHookEx():
hKeyboardHook = SetWindowsHookEx(WH_KEYBOARD, KeyboardProc, hInstance, 0);
The KeyboardProc() function is a callback function that is called by the OS every time a key is pressed. Inside KeyboardProc(), you decide if you want to trap the key or let the OS (or the next application in the hook chain) process it:
LRESULT KeyboardProc(...)
{
    if (Key == VK_SOMEKEY)
        return 1;               // Trap key
    return CallNextHookEx(...); // Let the OS handle it
}
To release the hook, you use:
UnhookWindowsHookEx(hKeyboardHook);
There are two types of hooks: local and global (system-wide) hooks. Local hooks can only trap events for your application, while global hooks can trap events for all running applications.
To trap the task-switching keys, it's necessary to write a global hook.
The Microsoft documentation states that global hook procedures should be placed in a separate DLL. The DLL is then mapped into the context of every process and can trap the events for each process -- that's why hooks are used to inject code into a remote process.
In my application, I wanted to avoid the use of an external library, so I set the global hook inside my own application. This is accomplished by passing, in the 3rd parameter of the SetWindowsHookEx() call, the instance handle of the application (and not of a library, as the documentation states). This technique works perfectly on Win 9x, but Win NT+ is different: there, the same effect can be achieved by using the new low-level keyboard and mouse hooks. These new hooks don't need an external library because they work differently from the other hooks. The documentation states: "[...] the WH_KEYBOARD_LL hook is not injected into another process. Instead, the context switches back to the process that installed the hook and it is called in its original context. Then the context switches back to the application that generated the event."
I'm not going into more details about hooks because there are many excellent articles dealing with this subject.
There's still one remaining problem: keyboard hooks cannot trap the Ctrl+Alt+Del sequence! Why? Because the OS never sends this key combination to the keyboard hook chain; it is handled at a different level of the OS and is never sent to applications. So, how can we trap the Ctrl+Alt+Del key combination? Read the next section to find out.
There are several ways to disable this key combination:
To disable the Task Manager, you only have to enable the policy "Remove Task Manager", either using the Group Policy editor (gpedit.msc) or setting the registry entry. Inside my application, I used the registry functions to set the value for the following key:
HKCU\Software\Microsoft\Windows\CurrentVersion\
Policies\System\DisableTaskMgr:DWORD
Set it to 1 to disable the Task Manager and to 0 (or delete the value) to enable it again.
You subclass a window's procedure by calling:
SetWindowLong(hWnd, GWL_WNDPROC, NewWindowProc);
The call only works for windows created by your application, i.e., you cannot subclass windows belonging to other processes (the address of NewWindowProc() is only valid for the process that called the SetWindowLong() function).
So, how can we subclass Winlogon's SAS window?
The answer is: you have to somehow map the address of NewWindowProc() into the address space of the remote process and pass it to the SetWindowLong() call.
The technique of mapping memory into the address space of a remote process is called Injection.
Injection can be accomplished in the following ways:

- Setting the registry value HKLM\Software\Microsoft\Windows NT\CurrentVersion\Windows\AppInit_DLLs:STRING, so the system loads your DLL into other processes. This method is only supported on Windows NT or higher.
- Calling CreateRemoteThread() to make the remote process run LoadLibrary() on your DLL.
- Copying the code directly into the remote process with WriteProcessMemory() and starting it with CreateRemoteThread().
My preferred injection technique is the last one. It has the advantage of not needing an external DLL. To work properly, this method requires very careful coding of the functions you inject into the remote process; to help you avoid the common pitfalls of this technique, I inserted some tips at the beginning of the source code. For people who think this method is too dangerous, I also included the CreateRemoteThread()/LoadLibrary() method: just #define DLL_INJECT and the application will use this method instead.
After injecting the code into Winlogon, subclassing the SAS window reduces to:
hSASWnd = FindWindow("SAS Window class", "SAS window");
SetWindowLong(hSASWnd, GWL_WNDPROC, NewSASWindowProc);
NewSASWindowProc() then simply swallows the WM_HOTKEY message generated for this key combination:
LRESULT CALLBACK NewSASWindowProc(HWND hWnd, UINT uMsg,
                                  WPARAM wParam, LPARAM lParam)
{
    if (uMsg == WM_HOTKEY)
    {
        // Ctrl+Alt+Del
        if (lParam == MAKELONG(MOD_CONTROL | MOD_ALT, VK_DELETE))
            return 1;
    }
    return CallWindowProc(OldSASWindowProc, hWnd, uMsg, wParam, lParam);
}
I only managed to disable Alt+Tab and Alt+Esc key combinations using this method.
In all the versions I tried, this method never worked!
With this technique, you create a new desktop and switch to it. Because the other processes (normally) run on the "Default" desktop (Winlogon runs on the "Winlogon" desktop and the screen saver runs on the "Screen-saver" desktop), this effectively locks the Windows desktop until the process that runs on the new desktop has finished.
The following code describes the steps necessary to create and switch to a new desktop and run a thread/process in it:
// Save original desktop
hOriginalThread = GetThreadDesktop(GetCurrentThreadId());
hOriginalInput = OpenInputDesktop(0, FALSE, DESKTOP_SWITCHDESKTOP);
// Create a new Desktop and switch to it
hNewDesktop = CreateDesktop("NewDesktopName", NULL, NULL, 0, GENERIC_ALL, NULL);
SetThreadDesktop(hNewDesktop);
SwitchDesktop(hNewDesktop);
// Execute thread/process in the new desktop
StartThread();
StartProcess();
// Restore original desktop
SwitchDesktop(hOriginalInput);
SetThreadDesktop(hOriginalThread);
// Close the Desktop
CloseDesktop(hNewDesktop);
To assign a desktop to a thread, SetThreadDesktop(hNewDesktop) must be called from within the running thread. To run a process in the new desktop, the lpDesktop member of the STARTUPINFO structure passed to CreateProcess() must be set to the name of the desktop.
In the introduction, I mentioned that in the end I didn't use any of the techniques described in this article. The strongest method of securing the Windows desktop is to replace the system shell with your own (that is, with your own application). In Windows 9x, edit the file c:\windows\system.ini and, in the [boot] section, change the key shell=Explorer.exe to shell=MyShell.exe.
In Windows NT or higher, you can replace the shell by editing the following Registry key:
HKLM\Software\Microsoft\Windows NT\
CurrentVersion\Winlogon\Shell:STRING=Explorer.Exe
This is a global change and affects all users. To affect only certain users, edit the following Registry key:
HKLM\Software\Microsoft\Windows NT\
CurrentVersion\Winlogon\Userinit:STRING=UserInit.Exe
Change the value from Userinit.exe to MyUserInit.exe.
Here's the code for MyUserInit:
#include <windows.h>
#include <Lmcons.h>

#define BACKDOORUSER TEXT("smith")
#define DEFAULTUSERINIT TEXT("USERINIT.EXE")
#define NEWUSERINIT TEXT("MYUSERINIT.EXE")

int main()
{
    STARTUPINFO si;
    PROCESS_INFORMATION pi;
    TCHAR szPath[MAX_PATH+1];
    TCHAR szUserName[UNLEN+1];
    DWORD nSize;

    // Get system directory
    szPath[0] = TEXT('\0');
    nSize = sizeof(szPath) / sizeof(TCHAR);
    if (!GetSystemDirectory(szPath, nSize))
        strcpy(szPath, "C:\\WINNT\\SYSTEM32");
    strcat(szPath, "\\");

    // Get user name
    szUserName[0] = TEXT('\0');
    nSize = sizeof(szUserName) / sizeof(TCHAR);
    GetUserName(szUserName, &nSize);

    // Is the current user the backdoor user?
    if (!stricmp(szUserName, BACKDOORUSER))
        strcat(szPath, DEFAULTUSERINIT);
    else
        strcat(szPath, NEWUSERINIT);

    // Zero these structs
    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    ZeroMemory(&pi, sizeof(pi));

    // Start the child process
    if (!CreateProcess(NULL, // No module name (use command line).
        szPath,              // Command line.
        NULL,                // Process handle not inheritable.
        NULL,                // Thread handle not inheritable.
        FALSE,               // Set handle inheritance to FALSE.
        0,                   // No creation flags.
        NULL,                // Use parent's environment block.
        NULL,                // Use parent's starting directory.
        &si,                 // Pointer to STARTUPINFO structure.
        &pi))                // Pointer to PROCESS_INFORMATION structure.
    {
        return -1;
    }

    // Close process and thread handles
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);

    return 0;
}
|
https://www.codeproject.com/Articles/7392/Lock-Windows-Desktop?fid=62485&df=90&mpp=25&sort=Position&spc=Relaxed&select=4033760&tid=4033760
|
CC-MAIN-2017-22
|
refinedweb
| 1,722
| 54.12
|
On 21 Jan, 20:16, George Sakkis <george.sak... at gmail.com> wrote:
> On Jan 21, 1:56 pm, glomde <brk... at gmail.com> wrote:
> > On 21 Jan, 18:59, Wildemar Wildenburger
> > <lasses_w... at klapptsowieso.net> wrote:
> > > glomde wrote:
> > > > Hi,
> > > > is it somehow possible to set the current namespace so that it is in
> > > > an object.
> > > > [snip]
> > > > set namespace testObj
> > > > Name would set testObj.Name to "Test".
> > > > [snip]
> > > > Is the above possible?
> > > Don't know, sorry. But let me ask you this: Why do you want to do this?
> > > Maybe there is another way to solve the problem that you want to solve.
> > The reason is that I do not want to repeat myself. It is to set up XML
> > type like trees and I would like to be able to do something like:
> >
> >     with ElemA():
> >         Description "Blahaha..."
> >         with ElemB():
> >             Description "Blahaha..."
> >     ....
> >
> > This would be instead of:
> >
> >     with ElemA() as node:
> >         node.Description "Blahaha..."
> >         with ElemB() as node:
> >             node.Description "Blahaha..."
> >     ....
> >
> > So to save typing and have something that I think looks nicer.
> ... and more confusing for anyone reading the code (including you
> after a few weeks/months). If you want to save a few keystrokes, you
> may use 'n' instead of 'node' or use an editor with easy auto
> completion.
>
> By the way, is there any particular reason for generating the XML
> programmatically like this? Why not have a separate template and use
> one of the dozen template engines to populate it?
>
> George

I am not using it for XML generation. It was only an example. But the
reason for using it programmatically is that you mix the power of python
with templating, using for loops and so on. XIST is doing something
similar, but I would like the namespace thing. The above was only an
example. And yes, it might be confusing if you read the code. But I still
want to do it; the question is, is it possible?
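For what it's worth, the closest I have come to glomde's syntax without frame hacks is an explicit-but-short builder object driven by nested with blocks; a hypothetical sketch (Elem and set are invented names here, not an existing library):

```python
# Hypothetical sketch: nested "with" blocks building a tree, with one
# short name per level instead of an injected namespace.

class Elem:
    def __init__(self, tag, parent=None):
        self.tag = tag
        self.attrs = {}
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def __enter__(self):
        return self

    def __exit__(self, *exc_info):
        return False  # never swallow exceptions

    def set(self, **attrs):
        self.attrs.update(attrs)

with Elem("ElemA") as a:
    a.set(Description="Blahaha...")
    with Elem("ElemB", parent=a) as b:
        b.set(Description="Blahaha...")
```

Writing into the caller's local namespace from inside a with block, by contrast, is not reliably supported by CPython (mutating locals() inside a function is not guaranteed to stick), which is why every workable approach ends up naming the node explicitly.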
|
https://mail.python.org/pipermail/python-list/2008-January/493135.html
|
CC-MAIN-2014-15
|
refinedweb
| 322
| 78.25
|
2015-02-06 01:31 AM
Hi all
I have two problems with the create volume command in WFA.
As per best practices, in a SAN environment we should not mount FC volumes into the global namespace, but the "Create volume" command does exactly that. Is there a way to use this command without mounting the volume into the namespace?
The other one is: if we want to set the auto_delete_options in WFA, the destroy_list key cannot contain multiple values. Is there a way to achieve that?
Regards and thx
Dario
Solved! SEE THE SOLUTION
2015-02-06 01:45 AM
Which mode are you using, 7-mode or Cluster mode?
In the "Create Volume" command in Cluster mode, the option string is parsed like this (and the same can be done for the destroy_list):
if ($AutoDeleteOptions)
{
    Get-WFALogger -Info -message $("Configuring auto delete options " + $AutoDeleteOptions)
    foreach ($option in $AutoDeleteOptions.split(","))
    {
        $option = $option.trim();
        $pair = $option.split(" ");
Regards
Abhi
2015-02-06 02:11 AM
Thanks for the quick reply.
We're using the cluster mode..
I've seen the code, but the problem is whether multiple values are acceptable for destroy_list, like this:
"state on, trigger volume, target_free_space 15, commitment destroy, destroy_list lun_clone vol_clone cifs_share file_clone sfsr"
And I always get the following error:
Got wrong autodelete option pair destroy_list lun_clone vol_clone cifs_share file_clone sfsr. Options should be in key value format.
Regards
Dario
2015-02-06 03:32 AM
The error is from here:

if ($pair.Length -ne 2)
{
    $msg = "Got wrong autodelete option pair " + $option + ". Options should be in key value format."
    throw $msg
}
"state on, trigger volume, target_free_space 15, commitment destroy, destroy_list lun_clone vol_clone cifs_share file_clone sfsr."
Remove sfsr, and it will work. That is the odd one in the comma-separated list.
Regards
Abhi
2015-02-06 03:48 AM
Dario,
@Snapshot Autodelete
---
sfsr is a valid option. This code was not written to accept multiple values for the destroy_list option: it assumes that, like the other keys, this one can only take one value, which is not true. This is a bug.
@Junctionpath
---
This too is a problem, as it will always set the junction path for the created volume, irrespective of whether the user wants it or not.
A bug in the product shall be raised, and it will be fixed in a future release. In the meantime, if you are okay using a non-NetApp-certified command, I can give you a fix that will solve both.
sinhaa
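The root cause is just the two-token assumption in the option parser above. Sketched outside PowerShell for clarity (Python, with invented names; this is not the certified command), the relaxation is to treat the first token as the key and everything after it as the value:

```python
def parse_autodelete_options(option_string):
    """Parse 'key value [value ...]' pairs from a comma-separated string,
    letting a key such as destroy_list carry several values."""
    options = {}
    for option in option_string.split(","):
        tokens = option.strip().split()
        if len(tokens) < 2:
            raise ValueError("Options should be in key value format: " + option)
        # First token is the key; everything after it belongs to the value.
        values = tokens[1:]
        options[tokens[0]] = values if len(values) > 1 else values[0]
    return options

opts = parse_autodelete_options(
    "state on, trigger volume, target_free_space 15, commitment destroy, "
    "destroy_list lun_clone vol_clone cifs_share file_clone sfsr")
```

The certified command's `$option.split(" ")` followed by the `-ne 2` length check is exactly the place that would need this change.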
2015-02-06 04:48 AM
I ran into the same issue; attached is the Create Volume for SAN command (for cluster mode).
2015-02-06 05:36 AM
Thanks for the replies.
I've made my own commands. One for unmounting the volume after it has been created and the other one for settings the snapshot autodelete options.
This is because i don't wanna touch the certified commands.
Is there already a bug open for these topics?
2015-02-09 12:50 AM
Bugs have been filed for those.
Burt ID: 887550, 887533
If you want, you may file a customer support case with NetApp to raise the priority of the fixes.
sinhaa
2017-03-15 06:39 AM
I can send you my self created command if you want..
|
http://community.netapp.com/t5/OnCommand-Storage-Management-Software-Discussions/WFA-Create-Volume/td-p/100287
|
CC-MAIN-2017-34
|
refinedweb
| 543
| 74.59
|
The BLEU score is a metric that measures the goodness of machine translation models. Though it was originally designed only for translation, it is now used for other natural language processing applications as well.
The BLEU score compares a candidate sentence against one or more reference sentences and tells how well the candidate matches the list of reference sentences. It gives an output score between 0 and 1.
A BLEU score of 1 means that the candidate sentence perfectly matches one of the reference sentences.
This score is a common metric of measurement for Image captioning models.
In this tutorial, we will be using the sentence_bleu() function from the nltk library. Let's get started.
Calculating the Bleu score in Python
To calculate the Bleu score, we need to provide the reference and candidate sentences in the form of tokens.
We will learn how to do that and compute the score in this section. Let’s start with importing the necessary modules.
from nltk.translate.bleu_score import sentence_bleu
Now we can input the reference sentences in the form of a list. We also need to create tokens out of sentences before passing them to the sentence_bleu() function.
1. Input and Split the sentences
The sentences in our reference list are:
'this is a dog' 'it is dog' 'dog it is' 'a dog, it is'
We can split them into tokens using the split function.
reference = [ 'this is a dog'.split(), 'it is dog'.split(), 'dog it is'.split(), 'a dog, it is'.split() ] print(reference)
Output :
[['this', 'is', 'a', 'dog'], ['it', 'is', 'dog'], ['dog', 'it', 'is'], ['a', 'dog,', 'it', 'is']]
This is what the sentences look like in the form of tokens. Now we can call the sentence_bleu() function to calculate the score.
2. Calculate the BLEU score in Python
To calculate the score use the following lines of code:
candidate = 'it is dog'.split() print('BLEU score -> {}'.format(sentence_bleu(reference, candidate)))
Output :
BLEU score -> 1.0
We get a perfect score of 1 as the candidate sentence belongs to the reference set. Let’s try another one.
candidate = 'it is a dog'.split() print('BLEU score -> {}'.format(sentence_bleu(reference, candidate)))
Output :
BLEU score -> 0.8408964152537145
We have the sentence in our reference set, but it isn’t an exact match. This is why we get a 0.84 score.
3. Complete Code for Implementing BLEU Score in Python
Here’s the complete code from this section.
from nltk.translate.bleu_score import sentence_bleu reference = [ 'this is a dog'.split(), 'it is dog'.split(), 'dog it is'.split(), 'a dog, it is'.split() ] candidate = 'it is dog'.split() print('BLEU score -> {}'.format(sentence_bleu(reference, candidate ))) candidate = 'it is a dog'.split() print('BLEU score -> {}'.format(sentence_bleu(reference, candidate)))
4. Calculating the n-gram score
While matching sentences, you can choose the number of words you want the model to match at once. For example, you can have words matched one at a time (1-gram), or you can match words in pairs (2-grams) or triplets (3-grams).
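What an "n-gram" means here can be made concrete in a couple of lines (a small helper sketch, not part of nltk's API):

```python
def ngrams(tokens, n):
    """Return every contiguous n-token window of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

bigrams = ngrams('it is a dog'.split(), 2)
# bigrams is [('it', 'is'), ('is', 'a'), ('a', 'dog')]
```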
In this section we will learn how to calculate these n-gram scores.
In the sentence_bleu() function you can pass an argument with weights corresponding to the individual grams.
For example, to calculate gram scores individually you can use the following weights.
Individual 1-gram: (1, 0, 0, 0)
Individual 2-gram: (0, 1, 0, 0)
Individual 3-gram: (0, 0, 1, 0)
Individual 4-gram: (0, 0, 0, 1)
Python code for the same is given below:
from nltk.translate.bleu_score import sentence_bleu reference = [ 'this is a dog'.split(), 'it is dog'.split(), 'dog it is'.split(), 'a dog, it is'.split() ] candidate = 'it is a dog'.split() print('Individual 1-gram: %f' % sentence_bleu(reference, candidate, weights=(1, 0, 0, 0))) print('Individual 2-gram: %f' % sentence_bleu(reference, candidate, weights=(0, 1, 0, 0))) print('Individual 3-gram: %f' % sentence_bleu(reference, candidate, weights=(0, 0, 1, 0))) print('Individual 4-gram: %f' % sentence_bleu(reference, candidate, weights=(0, 0, 0, 1)))
Output :
Individual 1-gram: 1.000000 Individual 2-gram: 1.000000 Individual 3-gram: 0.500000 Individual 4-gram: 1.000000
By default, the sentence_bleu() function calculates the cumulative 4-gram BLEU score, also called BLEU-4. The weights for BLEU-4 are as follows:
(0.25, 0.25, 0.25, 0.25)
Let’s see the BLEU-4 code:
score = sentence_bleu(reference, candidate, weights=(0.25, 0.25, 0.25, 0.25)) print(score)
Output :
0.8408964152537145
That’s the exact score we got without the n-gram weights added.
Conclusion
This tutorial was about calculating the BLEU score in Python. We learned what it is and how to calculate individual and cumulative n-gram Bleu scores. Hope you had fun learning with us!
|
https://www.journaldev.com/46659/bleu-score-in-python
|
CC-MAIN-2021-17
|
refinedweb
| 804
| 69.18
|
User talk:SK53
Railway gauge

I've replied to your comment about tagging railway gauges here: Talk:Railways#Gauge. Would welcome your input. Frankie Roberto 13:44, 7 August 2009 (UTC)
bulk_upload.py
Thank you for the bug report on bulk_upload.py made to the wiki, I will take a look at the patch and apply in due course. --Thomas Wood 13:49, 26 August 2009 (UTC)
NHD Import in Colorado
There's some curious data here... Take, for example, at lat=39.07193&lon=-108.59482&zoom=16 there's a "Connecticut lake" which I know hasn't existed for many, many years. It must predate some of the gravel mining done in the area, and is likely to be >30 years old. (I grew up only a few miles away from this location, and am working on updating the hometown map. Relatively new to osm) -- Riddochc 01:07, 8 April 2010 (UTC)
CodePoint fields info
Hope you don't mind I've thinned down the information you put at Ordnance Survey Opendata#Code-Point Open, and instead placed it on the data.gov.uk wiki. Maybe a better place for it I suppose. Could do some more restructuring along those lines.
Muki's blog post may be of interest to you.
-- Harry Wood 12:50, 2 July 2010 (UTC)
- Yeah, no problem. It was precisely the issue that Muki raises which is why I shoved it in the wiki. Of course I should have written an elegantly argued blog entry. The whole OS OpenData page could really do with being split, with subpages for each distinct data source. I'm not sure we have a good entry on projections, which I think is important, and we don't have anything in the import catalogue either. SK53 19:04, 2 July 2010 (UTC)
SK53/Haiti
Should SK53/Haiti be moved to User:SK53/Haiti? --EdLoach 14:26, 7 July 2010 (UTC)
YES!!!
Done --EdLoach 10:58, 19 July 2010 (UTC)
gluten_free=*
Please, move the page back to proposal namespace as that's the proper place for unapproved tags with low usage. Thanks --Skyper 19:01, 4 July 2013 (UTC)
Dear Skyper. I do not recognise the 'approval' process. I believe the wiki is for documenting tags as used. I do not propose to move this page back and will treat other attempts to move it as vandalism! SK53 (talk) 19:34, 4 July 2013 (UTC)
Translation
thanks for your connection
Translating the OSM wiki is really hard, especially for RTL languages. Is there any WYSIWYG editor for translating? And how can I add the Persian "Image of the Week", "News" and "meta info" templates?
- I'm not really the person to ask (see the next item), but people like Aseerel4c26 and EdLoach who are wiki administrators ought to be able to help. The only template I regularly edit is the calendar and that has a reasonable how-to in the template. SK53 (talk) 12:06, 24 March 2014 (UTC)
Reverting
Do you know that instead of this batch of reverts you could just open a old version, click edit, and save with an appropriate comment? --Aseerel4c26 (talk) 22:46, 19 March 2014 (UTC)
Take a break? ;-)
here overwritten and here wrong heading. Cheers --Aseerel4c26 (talk) 01:32, 16 April 2014 (UTC)
SPECIALTY not speciality
Hi SK53, since it caused confusion again, recently, could you please leave a comment at Talk:Proposed_features/Healthcare_2.0#Word_for_particular_areas_is_SPECIALTY_not_speciality mentioning that "SPECIALTY" (not "speciality") is the right word also in British English? ... if that *is* the case - I guess you are UK-based. Cheers --Aseerel4c26 (talk) 12:23, 10 January 2016 (UTC)
Great, thank you! --Aseerel4c26 (talk) 15:31, 10 January 2016 (UTC)
Feedlots
Hi SK53,
which tags do you propose for mapping feedlots? I'm asking you, because you've written Key:farming_system.
Cheers, Ethylisocyanat (talk) 10:21, 6 May 2017 (UTC)
Not very relevant: that was merely documenting a tag which I came across in editing using wikipedia and some talk messages. I did follow the feedlot discussion and from what I remember did not like the final decision. SK53 (talk) 18:30, 6 May 2017 (UTC)
Using "landuse=village_green" in Spain
Hi, SK53! The Spanish OSM Community is talking about using the landuse=village_green. We would like to ask you about this wiki page revision: "In Spain the tag has been used consensually to map Paseos". What is the source of this information? Thank you! --Daniel Capilla (talk) 19:50, 22 May 2017 (UTC)
|
https://wiki.openstreetmap.org/wiki/User_talk:SK53
|
CC-MAIN-2018-13
|
refinedweb
| 751
| 73.47
|
This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
To recapitulate the subject:

enum x { x1, x2, x3 };

struct elem_t {
    x a :10;
    x b :10;
};

template <class T>
bool operator!= (const T& x, const T& y)
{
    return !(x == y);
}

bool f(elem_t z)
{
    // replace a with b and watch the error disappear
    // Explicit cast of z.a to x also helps!!!
    return (x1 != z.a);
}

>> >I'll be interested to hear what the C++ standard says.
>>
>> Me too, but if it says that enums are classes, it is a
>> major bummer.
>
>I wouldn't be too surprised. Besides I think that the
>compiler should attempt to instantiate the operator, as
>you've defined it for all types - that is what
>template<class T> is supposed to mean, isn't it? And it
>would be bad to have behaviour defined for an enum half
>of the time. You're right though - it is a bit strange
>that it does one thing for "int" and another for "enum x"
>types.
>
>Maybe you should simply define a more specific operator
>for the "x" type; after all, it's only an integer, so
>you don't really need to use a reference...

That's not me, that's stl_relop.h. I was using bitfield enums happily
until I touched the inequality operator.

>OK. I just checked in the EGCS source. Your problem is
>caused by a piece of code written by Jason Merrill
>(jason@cygnus.com). It appears that the C++ standard
>does specify that enums are subject to operator
>overloading, though they aren't full class types.
>The change, for your interest, was made in '94, so
>any EGCS version later than that will not compile
>your code.

Right; and the bug went unnoticed until STL came along. In any case, the
fact that an explicit cast helps demonstrates that something is wrong
inside EGCS. The template requires the two operands to be of the same
type. The first one, x1, is of type x; if the second one, from egcs'
point of view, is of the same type for all purposes, then the cast
should not have helped; if it is of a different type (e.g. x:10), then
an implicit cast operation must be performed.

Leo
|
http://gcc.gnu.org/ml/gcc-bugs/1999-02n/msg00162.html
|
CC-MAIN-2015-40
|
refinedweb
| 385
| 73.07
|
Hi,
We have configured a collocated HA topology with replication in JBoss 7.2.0.
Failover and failback work fine with data replication the first time. However, the scenario below does not work:
1. Kill the first server -> failover occurs
2. Start the first server -> failback occurs
3. Now the HQ backup on the 2nd server must be restarted, otherwise failover can't happen again
A parameter was added to HornetQ (max-saved-replicated-journal-size) to solve this issue, but we could not find the parameter in the JBoss CLI or the messaging XSD.
They have suggested a workaround of restarting the 2nd server on failback. However, we had to restart both servers, otherwise we could not get the RemoteConnectionFactory for the 1st server.
Is the max-saved-replicated-journal-size parameter available in any WildFly version?
We are not able to proceed with our HA and cluster configurations due to this issue. It is not feasible to restart server manually on failback in production.
Can you suggest any other solution or workaround?
Regards,
Veena
Hi,
this should be addressed by [WFLY-2346] messaging: add 1.4 XML namespace - JBoss Issue Tracker
--
tomaz
|
https://developer.jboss.org/thread/234741
|
CC-MAIN-2018-39
|
refinedweb
| 182
| 59.3
|
I raised this question in VSTS - Developers community and was advised to consult here, so posting my original question:
Is there a way to write unit tests and get code coverage for aspx code-behind files? I searched a few forums but did not find an answer.
And I'm not looking anything outside VSTS (like Nunit or something else), it should be integrated to VSTS and runnable after successful build.
From one reply I got in Dev group is there is no way to have automated unit testing with code coverage possible for existing sites developed using ASP .NET 2.0 having traditional code behind files. Is this true?
Hard to believe that even the topic of "How to: create an ASP .NET unit test" inside MSDN documentation talks about generating unit tests for class files inside App_code directory. What about the existing asp code behind files(aspx.cs) built on top of asp .net 2.0 and vs 2005?
I'm looking for quick response here.
Hello Nikhil,
While our test generation does not automatically generate accessors and tests for the aspx.cs files, you could still test that code inside an ASP.NET unit test. You may find the TestContext.RequestedPage very useful. E.g. this is my aspx.cs code:
public partial class _Default : System.Web.UI.Page
{
    public string Foo() { return "Hello"; }
}
And respectively this is a sample code to test it:
[TestClass()]
public class Class1Test
{
    private TestContext testContextInstance;

    public TestContext TestContext
    {
        get { return testContextInstance; }
        set { testContextInstance = value; }
    }

    [TestMethod()]
    [HostType("ASP.NET")]
    [AspNetDevelopmentServerHost("%PathToWebRoot%\\WebSite", "/WebSite")]
    [UrlToTest("")]
    public void ConstructorTest()
    {
        PrivateObject po = new PrivateObject(TestContext.RequestedPage);
        object res = po.Invoke("Foo");
        Assert.AreEqual("Hello", res, "Different result");
    }
}
I hope this helps,
https://social.msdn.microsoft.com/Forums/en-US/5c984f7f-31c9-48ef-b8e5-c9c8357431ae/using-unit-test-for-aspx-code-behind-files?forum=vststest
This page explains how to install GKE On-Prem to a VMware vSphere 6.5 environment using an existing Dynamic Host Configuration Protocol (DHCP) server to assign IP addresses to cluster nodes. You can also install using static IPs.
Overview
The instructions on this page show you how to create an admin cluster and one user cluster with three nodes. Each node runs on a virtual machine (VM) in a vSphere cluster, and each node has an IP address assigned to it by a DHCP server in your environment.
After you've created the clusters, you can create additional user clusters and add or remove nodes in a user cluster.
Before you begin
Set up your on-prem environment as described in System requirements.
Complete the procedures in Preparing to install.
Create an admin workstation in vSphere.
Set a default project. Setting a default Google Cloud project causes all Cloud SDK commands to run against that project, so that you don't need to specify your project for each command:
gcloud config set project [PROJECT_ID]
Replace
[PROJECT_ID]with your project ID. (You can find your project ID in Cloud Console, or by running
gcloud config get-value project.)
Using DHCP reservations for cluster nodes
In Kubernetes, it's important that node IP addresses never change. If a node IP address changes or becomes unavailable, it can break the cluster. To prevent this, consider using DHCP reservations to assign permanent addresses to the nodes in your admin and user clusters. Using DHCP reservations ensures that each node is assigned the same IP address after a restart or lease renewal.
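For example, on a dnsmasq-based DHCP server (an assumption; most DHCP servers have an equivalent mechanism), a reservation binds each node VM's MAC address to a fixed IP. The file path, MAC addresses, hostnames, and IPs below are all placeholders:

```yaml
# Hypothetical /etc/dnsmasq.d/gke-onprem.conf fragment.
# dhcp-host=<MAC>,<hostname>,<IP> pins the lease for that VM.
dhcp-host=00:50:56:aa:bb:01,admin-node-1,192.168.0.11
dhcp-host=00:50:56:aa:bb:02,user-node-1,192.168.0.21
dhcp-host=00:50:56:aa:bb:03,user-node-2,192.168.0.22
```

You can read each VM's MAC address from its vSphere network adapter settings.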
Choosing a container image registry for installation
To install, GKE On-Prem needs to know where to pull its containerized cluster components. You have two options:
Container Registry
By default, GKE On-Prem uses an existing, Google-owned container
image registry hosted by Container Registry.
Apart from setting up your proxy to allow traffic from
gcr.io, this doesn't
require additional setup.
Private Docker registry
You can choose to use a private Docker registry for installation. GKE On-Prem pushes its cluster components to that Docker registry.
Before you install, you need to configure the registry. During installation, you need to populate the GKE On-Prem configuration file with information about the registry.
Configuring a private Docker registry for installation (optional)
This section explains how to configure an existing Docker registry for
installing GKE On-Prem. To learn how to create a Docker registry, see
Run an externally-accessible registry.
After you've configured the registry, you populate the
privateregistryconfig field of the
GKE On-Prem configuration file.
If you want to use your private Docker registry for installation, your admin workstation VM must trust the CA that signed your certificate. GKE On-Prem does not support unsecured Docker registries. When you start your Docker registry, you must provide a certificate and a key. The certificate can be signed by a public certificate authority (CA), or it can be self-signed.
To establish this trust, perform the following steps from your admin workstation VM:
Create a folder.
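As a sketch of that trust step (the registry hostname and file names are assumptions, not values from this guide), the CA certificate is placed where Docker looks for per-registry CAs, which is conventionally /etc/docker/certs.d/[REGISTRY_HOST]/ca.crt:

```shell
# Sketch: make Docker on the admin workstation trust a private registry's CA.
# REGISTRY_HOST and CA_CERT are placeholders; CERTS_ROOT would normally be
# /etc/docker/certs.d, but defaults here to a scratch directory so the
# sketch is safe to run anywhere.
REGISTRY_HOST="my-registry.example.com:5000"
CA_CERT="${CA_CERT:-registry-ca.crt}"
CERTS_ROOT="${CERTS_ROOT:-$(mktemp -d)}"

# Create a placeholder certificate if none exists (for the sketch only).
[ -f "$CA_CERT" ] || printf '%s\n' '-----BEGIN CERTIFICATE-----' > "$CA_CERT"

# Docker trusts <root>/<registry host>/ca.crt for that registry.
mkdir -p "${CERTS_ROOT}/${REGISTRY_HOST}"
cp "$CA_CERT" "${CERTS_ROOT}/${REGISTRY_HOST}/ca.crt"
echo "CA installed under ${CERTS_ROOT}/${REGISTRY_HOST}"
```

After the certificate is in place, restart the Docker daemon so it picks up the new trust configuration.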
Now, when you run
gkectl prepare during installation, the images needed for
installation are pushed to your Docker registry.
Troubleshooting registry configuration.
Create service accounts' private keys in your admin workstation
In Preparing to install, you created four service accounts. Now, you need to create a JSON private key file for each of those service accounts. You'll provide these keys during installation.
List service accounts' email addresses
First, list the service accounts in your Google Cloud project:
gcloud iam service-accounts list
For a Google Cloud project named
my-gcp-project, this command's output
looks like this:
Make a note of each service account's email address. For each of the following sections, you provide the relevant account's email address.
Access service account
gcloud iam service-accounts keys create access-key.json \ --iam-account [ACCESS_SERVICE_ACCOUNT_EMAIL]
where [ACCESS_SERVICE_ACCOUNT_EMAIL] is the access service account's email address.
Register service account
gcloud iam service-accounts keys create register-key.json \ --iam-account [REGISTER_SERVICE_ACCOUNT_EMAIL]
where [REGISTER_SERVICE_ACCOUNT_EMAIL] is the register service account's email address.
Connect service account
gcloud iam service-accounts keys create connect-key.json \ --iam-account [CONNECT_SERVICE_ACCOUNT_EMAIL]
where [CONNECT_SERVICE_ACCOUNT_EMAIL] is the connect service account's email address.
Cloud Monitoring service account
gcloud iam service-accounts keys create stackdriver-key.json \ --iam-account [STACKDRIVER_SERVICE_ACCOUNT_EMAIL]
where [STACKDRIVER_SERVICE_ACCOUNT_EMAIL] is the Cloud Monitoring service account's email address.
Generating a configuration file
To start an installation, you run
gkectl create-config to generate a
configuration file. You modify the file with your environment's specifications
and with the cluster specifications you want.
To generate the file, run the following command, where
--config [PATH] is optional and accepts a path and
name for the configuration file. Omitting
--config creates
config.yaml in the current working directory:
gkectl create-config [--config [PATH]]
Modifying the configuration file
Now that you've generated the configuration file, you need to modify it to be suitable for your environment and to meet your expectations for your clusters. The following sections explain each field, the values it expects, and where you might find the information. Some fields are commented out by default. If any of those fields are relevant to your installation, uncomment them and provide values.
bundlepath
A GKE On-Prem bundle is a set of YAML files. Collectively, the YAML files describe all of the components in a particular release of GKE On-Prem.
When you create an admin workstation, it comes with a bundle at
/var/lib/gke/bundles/gke-onprem-vsphere-[VERSION]-full.tgz. This bundle's
version matches the version of the OVA you used to create the admin workstation.
Set the value of
bundlepath to the path of your admin workstation's bundle
file. That is, set
bundlepath to:
/var/lib/gke/bundles/gke-onprem-vsphere-[VERSION]-full.tgz
where [VERSION] is the version of GKE On-Prem that you are installing. The latest version is 1.1.2-gke.0.
Note that you are free to keep your bundle file in a different location or give
it a different name. Just make sure that in your configuration file, the value
of
bundlepath is the path to your bundle file, whatever that might be.
vCenter specification
The vCenter Server specification,
vcenter, holds information about your
vCenter Server instance that GKE On-Prem needs to install to your
environment.
vcenter.credentials.address
The
vcenter.credentials.address field holds the IP address or the hostname
of your vCenter server.
Fill in the
vcenter.credentials.address field
in your configuration file. For example:
vcenter: credentials: address: "203.0.113.1" ...
vcenter: credentials: address: "my-host.my-domain.example" ...
You must choose a value that appears in the certificate. For example, if the IP
address does not appear in the certificate, you cannot use it for
vcenter.credentials.address.
vcenter.credentials
GKE On-Prem needs to know your vCenter Server's username, and
password. To provide this information, set the
username and
password values
under
vcenter.credentials. For example:
vcenter: credentials: ... username: "my-name" password: "my-password"
vcenter.datacenter,
.datastore,
.cluster,
.network
GKE On-Prem needs some information about the structure of your
vSphere environment. Set the values under
vcenter to provide this information.
For example:
vcenter: ... datacenter: "MY-DATACENTER" datastore: "MY-DATASTORE" cluster: "MY-VSPHERE-CLUSTER" network: "MY-VIRTUAL-NETWORK"
vcenter.resourcepool
A vSphere resource pool
is a logical grouping of vSphere VMs in your vSphere cluster. If you are using
a resource pool other than the default, provide its name to
vcenter.resourcepool. For example:
vcenter: ... resourcepool: "my-pool"
If you want
GKE On-Prem to deploy its nodes to the vSphere cluster's default
resource pool, provide an empty string to
vcenter.resourcepool. For example:
vcenter: ... resourcepool: ""
vcenter.datadisk
GKE On-Prem creates a virtual machine disk (VMDK) to hold the
Kubernetes object data for the admin cluster. The installer creates the VMDK for
you, but you must provide a name for the VMDK in the
vcenter.datadisk field.
For example:
vcenter: ... datadisk: "my-disk.vmdk"
- vSAN datastore: Creating a folder for the VMDK
If you are using a vSAN datastore, you need to put the VMDK in a folder. You must manually create the folder ahead of time. To do so, you could use
govc to create a folder:
govc datastore.mkdir
The
proxy field holds information about your network's proxy server, if your environment uses one.
Admin cluster specification
The admin cluster specification,
admincluster, holds information that
GKE On-Prem needs to create the admin cluster.
admincluster.vcenter.network
In
admincluster.vcenter.network, you can specify a vCenter network
for your admin cluster nodes. Note that this overrides the global setting you
provided in
vcenter. For example:
admincluster: vcenter: network: MY-ADMIN-CLUSTER-NETWORK
admincluster.ipblockfilepath
This field is used if you are using static IPs. Since you are using a DHCP
server to allocate IP addresses, leave the
admincluster.ipblockfilepath field
commented out.
admincluster.bigip.credentials (integrated load balancing mode)
If you are using integrated load balancing mode, GKE On-Prem needs to
know the IP address or hostname, username, and password of your F5 BIG-IP load balancer. Set
the values under
admincluster.bigip to provide this information. For example:
admincluster: ... bigip: credentials: address: "203.0.113.2" username: "my-admin-f5-name" password: "rJDlm^%7aOzw"
admincluster.bigip.partition (integrated load balancing mode)
If you are using integrated load balancing mode, you must
create a BIG-IP partition
for your admin cluster. Set
admincluster.bigip.partition to the name of your
partition. For example:
admincluster: ... bigip: partition: "my-admin-f5-partition"
admincluster.vips
Set the value of
admincluster.vips.controlplanevip to the
IP address that you have chosen to configure on the load balancer
for the Kubernetes API server of the admin cluster. Set the value of
ingressvip to the IP address you have chosen to configure on the load balancer
for the admin cluster's ingress controller. For example:
admincluster: ... vips: controlplanevip: 203.0.113.3 ingressvip: 203.0.113.4
admincluster.serviceiprange and
admincluster.podiprange
The admin cluster must have a
range of IP addresses
to use for Services and a range of IP addresses to use for Pods. These ranges
are specified by the
admincluster.serviceiprange and
admincluster.podiprange fields. For example:
admincluster: ... serviceiprange: 10.96.232.0/24 podiprange: 192.168.0.0/16
User cluster specification
The user cluster specification,
usercluster, holds information that
GKE On-Prem needs to create the initial user cluster.
Disabling VMware DRS anti-affinity rules (optional).
This feature requires that your vSphere environment meets the following conditions:
- VMware DRS is enabled. VMware DRS requires vSphere Enterprise Plus license edition. To learn how to enable DRS, see Creating a DRS Cluster.
- The vSphere user account provided in the
vcenterfield has the
Host.Inventory.EditClusterpermission.
- There are at least three physical hosts available.
If you do not have DRS enabled, or if you do not have at least three hosts to
which vSphere VMs can be scheduled, add
usercluster.antiaffinitygroups.enabled: false to your configuration file.
For example:
usercluster: ... antiaffinitygroups: enabled: false
- For clusters running more than three nodes: if vSphere vMotion moves a node to a different host, the node's workloads will need to be restarted before they are distributed across hosts again.
usercluster.vcenter.network
In
usercluster.vcenter.network, you can specify a vCenter network
for your user cluster nodes. Note that this overrides the global setting you
provided in
vcenter. For example:
usercluster: vcenter: network: MY-USER-CLUSTER-NETWORK
usercluster.ipblockfilepath
This field is used if you are using static IPs. Since you are using a DHCP
server to allocate IP addresses, leave the
usercluster.ipblockfilepath field
commented out.
usercluster.bigip.credentials (integrated load balancing mode)
If you are using integrated load balancing mode, GKE On-Prem needs to
know the IP address or hostname, username, and password of the F5 BIG-IP load
balancer that you intend to use for the user cluster. Set the values under
usercluster.bigip to provide this information. For example:
usercluster: ... bigip: credentials: address: "203.0.113.5" username: "my-user-f5-name" password: "8%jfQATKO$#z" ...
usercluster.bigip.partition (integrated load balancing mode)
You must
create a BIG-IP partition
for your user cluster. Set
usercluster.bigip.partition to the name of your
partition. For example:
usercluster: ... bigip: partition: "my-user-f5-partition" ...
usercluster.vips
Set the value of
usercluster.vips.controlplanevip to the
IP address that you have chosen to configure on the load balancer
for the Kubernetes API server of the user cluster. Set the value of
ingressvip to the IP address you have chosen to configure on the load balancer
for the user cluster's ingress controller. For example:
usercluster: ... vips: controlplanevip: 203.0.113.6 ingressvip: 203.0.113.7
usercluster.serviceiprange and
usercluster.podiprange
The user cluster must have a
range of IP addresses
to use for Services and a range of IP addresses to use for Pods. These ranges
are specified by the
usercluster.serviceiprange and
usercluster.podiprange fields. For example:
usercluster: ... serviceiprange: 10.96.233.0/24 podiprange: 172.16.0.0/12
usercluster.clustername
Set the value of
usercluster.clustername to a name of your choice. Choose a
name that is no longer than 40 characters. For example:
usercluster: ... clustername: "my-user-cluster-1"
usercluster.masternode.replicas
The
usercluster.masternode.replicas field specifies how many control plane nodes you
want the user cluster to have. A user cluster's control plane node runs the user
control plane, the Kubernetes control plane components. This value must be
1
or
3:
- Set this field to
1to run one user control plane.
- Set this field to
3if you want to have a high availability (HA) user control plane composed of three control plane nodes that each run a user control plane.
usercluster.masternode.cpus and
usercluster.masternode.memorymb
The
usercluster.masternode.cpus and
usercluster.masternode.memorymb fields
specify how many CPUs and how much memory, in megabytes, is allocated to each
control plane node of the user cluster. For example:
usercluster: ... masternode: cpus: 4 memorymb: 8192
usercluster.workernode.replicas
The
usercluster.workernode.replicas field specifies how many worker nodes you
want the user cluster to have. The worker nodes run the cluster workloads.
usercluster.workernode.cpus and
usercluster.workernode.memorymb
The
usercluster.workernode.cpus and
usercluster.workernode.memorymb fields
specify how many CPUs and how much memory, in megabytes, is allocated to each
worker node of the user cluster. For example:
usercluster: ... workernode: cpus: 4 memorymb: 8192 replicas: 3
usercluster.oidc
If you intend for clients of the user cluster to use OIDC authentication, set
values for the fields under
usercluster.oidc. Configuring OIDC is optional.
To learn how to configure OIDC, see Authenticating with OIDC.
- About installing version 1.0.2-gke.3
In version 1.0.2-gke.3, if you want to use OIDC, the
clientsecretfield is required even if you don't want to log in to a cluster from Cloud Console. In that case, you can provide a placeholder value for
clientsecret:
oidc: clientsecret: "secret"
usercluster.sni
Server Name Indication (SNI), an extension to Transport Layer Security (TLS), allows servers to present multiple certificates on a single IP address and TCP port, depending on the client-indicated hostname.
If your CA is already distributed as a trusted CA to clients outside your user cluster and you want to rely on this chain to identify trusted clusters, you can configure the Kubernetes API server with an additional certificate that is presented to external clients of the load balancer IP address.
To use SNI with your user clusters, you need to have your own CA and Public Key Infrastructure (PKI). You provision a separate serving certificate for each user cluster, and GKE On-Prem adds each additional serving certificate to its respective user cluster.
To configure SNI for the Kubernetes API server of the user cluster, provide
values for
usercluster.sni.certpath (path to the external certificate) and
usercluster.sni.keypath (path to the external certificate's private key file).
For example:
usercluster: ... sni: certpath: "/my-cert-folder/example.com.crt" keypath: "/my-cert-folder/example.com.key"
lbmode
You can use integrated load balancing with DHCP. Integrated load balancing mode applies to your admin cluster and your initial user cluster. It also applies to any additional user clusters that you create in the future. Integrated load balancing mode supports using F5 BIG-IP as your load balancer.
Set the value of
lbmode to
Integrated. For example:
lbmode: Integrated
gkeconnect
The
gkeconnect specification holds information that GKE On-Prem
needs to set up management of your on-prem clusters from Google Cloud Console.
Set
gkeconnect.projectid to the project ID of the Google Cloud project
where you want to manage your on-prem clusters.
Set the value of
gkeconnect.registerserviceaccountkeypath to the path of the
JSON key file for your
register service account.
Set the value of
gkeconnect.agentserviceaccountkeypath to the path of the
JSON key file for your
connect service account.
stackdriver
The
stackdriver specification holds information that GKE On-Prem
needs to store log entries generated by your on-prem clusters.
Set
stackdriver.projectid to the project ID of the Google Cloud project
where you want to view Stackdriver logs that pertain to your on-prem clusters.
Set
stackdriver.clusterlocation to a Google Cloud region where you want
to store Stackdriver logs. It is a good idea to choose a region that is near
your on-prem data center.
Set
stackdriver.enablevpc to
true if you have your cluster's network
controlled by a VPC. This ensures that all
telemetry flows through Google's restricted IP addresses.
Set
stackdriver.serviceaccountkeypath to the path of the JSON key file for
your
Stackdriver Logging service account.
For example:
stackdriver: projectid: "my-project" clusterlocation: "us-west1" enablevpc: false serviceaccountkeypath: "/my-key-folder/stackdriver-key.json"
privateregistryconfig
If you have a
private Docker registry,
the
privateregistryconfig field holds information that GKE On-Prem
uses to push images to your private registry. If you don't specify a private
registry,
gkectl pulls GKE On-Prem's container images from its
Container Registry repository,
gcr.io/gke-on-prem-release, during installation.
Under
privatedockerregistry.credentials, set
address to the IP address of
the machine that runs your private Docker registry. Set
username and
password to the username and password of your private Docker registry.
When Docker pulls an image from your private registry, the registry must prove its identity by presenting a certificate. The registry's certificate is signed by a certificate authority (CA). Docker uses the CA's certificate to validate the registry's certificate.
Set
privateregistryconfig.cacertpath to the path of the CA's certificate. For
example:
privateregistryconfig: ... cacertpath: /my-cert-folder/registry-ca.crt
gcrkeypath
Set the value of
gcrkeypath to the path of the JSON key file for your
access service account.
For example:
gcrkeypath: "/my-key-folder/access-key.json"
cloudauditlogging
If you want to send your Kubernetes audit logs to your Google Cloud
project, populate the
cloudauditlogging specification. For example:
cloudauditlogging: projectid: "my-project" # A GCP region where you would like to store audit logs for this cluster. clusterlocation: "us-west1" # The absolute or relative path to the key file for a GCP service account used to # send audit logs from the cluster serviceaccountkeypath: "/my-key-folder/audit-logging-key.json"
Learn more about using audit logging.
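Putting the fields above together, a minimal DHCP-based configuration might look like the following sketch. Every value is a placeholder taken from the examples in this guide, and the exact field set depends on your GKE On-Prem version:

```yaml
# Sketch of a DHCP-based config.yaml; all values are placeholders.
bundlepath: "/var/lib/gke/bundles/gke-onprem-vsphere-1.1.2-gke.0-full.tgz"
vcenter:
  credentials:
    address: "203.0.113.1"
    username: "my-name"
    password: "my-password"
  datacenter: "MY-DATACENTER"
  datastore: "MY-DATASTORE"
  cluster: "MY-VSPHERE-CLUSTER"
  network: "MY-VIRTUAL-NETWORK"
  datadisk: "my-disk.vmdk"
admincluster:
  bigip:
    credentials:
      address: "203.0.113.2"
      username: "my-admin-f5-name"
      password: "my-f5-password"
    partition: "my-admin-f5-partition"
  vips:
    controlplanevip: 203.0.113.3
    ingressvip: 203.0.113.4
  serviceiprange: 10.96.232.0/24
  podiprange: 192.168.0.0/16
usercluster:
  clustername: "my-user-cluster-1"
  masternode:
    replicas: 1
    cpus: 4
    memorymb: 8192
  workernode:
    replicas: 3
    cpus: 4
    memorymb: 8192
  vips:
    controlplanevip: 203.0.113.6
    ingressvip: 203.0.113.7
  serviceiprange: 10.96.233.0/24
  podiprange: 172.16.0.0/12
lbmode: Integrated
gkeconnect:
  projectid: "my-project"
  registerserviceaccountkeypath: "/my-key-folder/register-key.json"
  agentserviceaccountkeypath: "/my-key-folder/connect-key.json"
stackdriver:
  projectid: "my-project"
  clusterlocation: "us-west1"
  enablevpc: false
  serviceaccountkeypath: "/my-key-folder/stackdriver-key.json"
gcrkeypath: "/my-key-folder/access-key.json"
```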
Validating the configuration file
After you've modified the configuration file, run
gkectl check-config to
verify that the file is valid and can be used for installation:
gkectl check-config --config [PATH_TO_CONFIG]
If the command returns any
FAILURE messages, fix the issues and validate the
file again.
Skipping validations
The following
gkectl commands automatically run validations against your
config file:
gkectl prepare
gkectl create cluster
gkectl upgrade
To skip a command's validations, pass in
--skip-validation-all. For example,
to skip all validations for
gkectl prepare:
gkectl prepare --config [PATH_TO_CONFIG] --skip-validation-all
To see all available flags for skipping specific validations:
gkectl check-config --help
Running
gkectl prepare
Before you install, you need to run
gkectl prepare on your admin workstation
to initialize your vSphere environment. The
gkectl prepare command performs the
following tasks:
Import the node OS image to vSphere and mark it as a template.
Optionally, validate the container images' build attestations, thereby verifying the images were built and signed by Google and are ready for deployment.
Run
gkectl prepare with the GKE On-Prem configuration file, where
--validate-attestations is optional:
gkectl prepare --config [CONFIG_FILE] --validate-attestations
Positive output from
--validate-attestations is
Image [IMAGE_NAME] validated.
Installing GKE On-Prem
You've created a configuration file that specifies how your environment looks
and how you'd like your clusters to look, and you've validated the file. You ran
gkectl prepare to initialize your environment with the GKE On-Prem
software. Now you're ready to initiate a fresh installation of
GKE On-Prem.
To install GKE On-Prem, run gkectl create cluster with your validated configuration file:
gkectl create cluster --config [PATH_TO_CONFIG]
Connecting clusters to Google
When you populate the
gkeconnect specification, your user cluster is automatically registered with Cloud Console. You can view a registered GKE On-Prem cluster in Cloud Console's Kubernetes clusters menu. From there, you can sign in to the cluster to view its workloads.
If you don't see your cluster in Cloud Console within one hour of creating it, refer to Connect troubleshooting.
Enabling ingress
After your user cluster is running, you must enable ingress by creating a Gateway object. The first part of the Gateway manifest is always this:
apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: istio-autogenerated-k8s-ingress namespace: gke-system spec: selector: istio: ingress-gke-system
You can tailor the rest of the manifest according to your needs. For example, this manifest says that clients can send requests on port 80 using the HTTP/2 protocol and any hostname: servers: - port: number: 80 name: http protocol: HTTP2 hosts: - "*"
If you want to accept HTTPS requests, then you must provide one or more certificates that your ingress controller can present to clients.
To provide a certificate:
- Create a Secret that holds your certificate and key.
- Create a Gateway object, or modify an existing Gateway object, that refers to your Secret. The name of the Gateway object must be
istio-autogenerated-k8s-ingress.
For example, suppose you have already created a certificate file,
ingress-wildcard.crt, and a key file
ingress-wildcard.key.
Create a Secret named
ingressgateway-wildcard-certs:
kubectl create secret tls \ --namespace gke-system \ ingressgateway-wildcard-certs \ --cert ./ingress-wildcard.crt \ --key ./ingress-wildcard.key
Here's a manifest for a Gateway that refers to your Secret. Clients can call on port 443 using the HTTPS protocol and any hostname that matches *.example.com. Note that the hostname in the certificate must match the hostname in the manifest, *.example.com in this example: servers: - hosts: - "*.example.com" port: name: https-demo-wildcard number: 443 protocol: HTTPS tls: mode: SIMPLE credentialName: ingressgateway-wildcard-certs
You can create multiple TLS certs for different hosts by modifying your Gateway manifest.
Save your manifest to a file named
my-gateway.yaml, and create the Gateway:
kubectl apply -f my-gateway.yaml
Now you can use Kubernetes Ingress objects in the standard way.
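For instance, a minimal Ingress that sends all HTTP traffic to a single backend Service (the Service name hello-service and its port are hypothetical) could look like this:

```yaml
# Minimal Ingress sketch; hello-service is a placeholder Service name.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  backend:
    serviceName: hello-service
    servicePort: 80
```

Apply it with kubectl apply -f my-ingress.yaml; the ingress VIP you configured on the load balancer then serves the traffic.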
Limitations
What's next
- Learn how to create additional user clusters.
- View your clusters in Cloud Console.
- Log in to your clusters.
https://cloud.google.com/anthos/clusters/docs/on-prem/1.1/how-to/installation/install-dhcp
Beginner
Beginner how to call a gui class from applet class
Hello Friend,
Please visit the following link:
Thanks
beginner
beginner provide a simple java exception program that uses try , catch, throw and finally blocks
java beginner
java beginner Hi
using the hashCode() and equals() methods and a HashMap object, how to compare employee lastname and firstname, and display in console
I am Swetha. I am confused about when to use the service(), doGet(), and doPost() methods in servlets
Any request from the client is handled initially by the service() method before being delegated to the doXxx() methods
java bEGINNER
java bEGINNER WHAT IS THE BEST WAYS,BOOKS AND pRACTICAL IMPLEMENTATION TECHNIQUES FOR A BEGINNER
Nag(Beginner)
Nag(Beginner) sir i am new to java.. i want to know what topics comes under core java, adv. java and also the remaining topics... plz tell me
Java Tutorials
beginner
beginner provide me a simple html program for an usual registeration fom, without function.
<html>
<form name="form" method="post" >
<table>
<tr><td>Enter Name</td><td><
Java beginner - Java Beginners
Java beginner what is the diffrence between abstraction and interface? with coding
jAVA BEGINNER PROBLEMS
jAVA BEGINNER PROBLEMS I need the program that takes from standard input an expression without left parenthesis and prints the equivalent infix expression with the parenthesis inserted???
Like EXAMPLE:
1+2)3-4)5-6
Generating Random Numbers to Fill array. Java Beginner needing help!
Generating Random Numbers to Fill array. Java Beginner needing help! Hello all!
I am new to this site, and Java programming.
My problem is: Write... in your main method by calls to the generatePermutation() and displayPermutedArray
java method
java method can we declare a method in java like this
static {
System.loadLibrary("nativelib");
}
i have seen this in a java learning E book.
i don't understand the
static
{
}
plz help me. what kind of method
Java guide
Java guide
Any beginner requires guidance to start with. A beginner in the field of
java... practical guide to the beginner and experienced java
programmer. These tutorials have
java method - Java Beginners
java method Plz help me on toString() in java Hi Friend,
The Object class provides the method toString() that returns the string..., the toString() method of the object is automatically called.
Thanks
java method - Java Beginners
java method hi friends, Is there any default return type for methods in java? There is no default return type in java, as a user you have to specify the return type even void
Beginner
beginner
java method - Java Beginners
java method i wanna explation about a method
for example... Mail[] getAllMails(String userName)
I WANNA EXPLATION ABOUT ABOVE METHOD CAN U... and Tutorials on Java Mail visit to :
Thanks
pass method reference in java
pass method reference in java How to pass method reference in Java
class method in java
class method in java How to write a class method in Java?
You must read Java - Class, Object and Methods in Java
try catch method in java
try catch method in java try catch method in java - when and how should i use the try and catch method in Java ?
Please visit the following links:
http
Using throw in java method
Using throw in java method using throw with method implies what
Java making a method deprecated
Java making a method deprecated java making a method deprecated
In Java how to declare a method depricated? Is it correct to use
depricated... or method in Java. Instead of it
you can use alternatives that exists.
Syntax
Java Tutorial
Java Tutorials
If you are a beginner and looking for the Java tutorials to learn java
programming language from scratch then this the best place to start with the
Java programming language.
Java programming language contains huge
Destroy method in java - Java Beginners
Destroy method in java Hi,
What is the implementation of destroy method in java.. is it native or java code?
Thanks
Hi Friend,
This method is not implemented.
Thanks
method inside the method??
method inside the method?? can't we declare a method inside a method in java??
for eg:
public class One
{
public static void main(String[] args)
{
One obj=new One();
One.add();
private static void add
Method decleration in java
Method decleration in java What is the catch or declare rule for method declarations
Static method in java - Java Beginners
Static method in java What are static method in Java Hi Friend,
Please visit the following link:
Hope that it will be helpful for you
getch() method in java
getch() method in java does someone know how can i obtain the getch() function in java?
Hi Friend,
In java, you can use the method next() of Scanner class.
Here is an example:
import java.util.*;
class Hello
Java Method Overloading - Java Beginners
Java Method Overloading can method be overloaded in subclass or not? Hi Friend,
Yes A subclass can overload the methods.For Example...:
Thanks
Java static method
Java static method Can we override static methods
Java synchronized method
Java synchronized method What are synchronized methods and synchronized statements
Method
of method in java object-oriented technique first one is the Instance method... by method's name and the parameter types. Java has a powerful feature
which...
Method
Finalize method in Java - Java Beginners
Finalize method in Java Hi,
What is the use of Finalize method in Java? Give me few example of Finalize method.
Thanks
Hi Friend,
Java uses finalize() method for Garbage collection.
Example:
import
method overriding in java
method overriding in java program to compute area of square,rectangle,triangle,circle and volume of sphere,cylinder and perimeter of a cube using method overriding
How to Java Program
If you are beginner in
java , want to learn and make career in the Java... for the respective operating system.
Java Program for Beginner
Our first
method overloading - Java Beginners
method overloading Write a program to demonstrate the method...
} Hi Friend,
In Java, the methods having the same name within... is referred to as Method Overloading. In the given example, we have defined
Method Overloading
Method Overloading In java can method be overloaded in different class
Java Method with arrays
Java Method with arrays My assignment is to write a java method..., write a program to test your method.
my main method code is :
public class...];
currentSmallestIndex = i;
}
}
return currentSmallestIndex;
}
/.... Suppose, for instance, that you want to write a method to return what
java program on recursive method
java program on recursive method in how many ways can you make change for one dollar(100 cents) using pennies(1-cent coins), nickels(5 cents), dimes(10 cents),and quarter(25 cents)? the coins must add up to the exact total
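One hedged way to answer this (names invented; not the poster's code) is a recursive count over the coin denominations: at each step either use the current coin again or move on to the next one.

```java
public class CoinChange {
    static final int[] COINS = {1, 5, 10, 25};

    // Count the ways to make `amount` cents using coins from index `i` onward.
    public static int ways(int amount, int i) {
        if (amount == 0) return 1;                     // exact total reached
        if (amount < 0 || i == COINS.length) return 0; // overshot, or out of coins
        // Either use coin i again, or skip to the next denomination.
        return ways(amount - COINS[i], i) + ways(amount, i + 1);
    }

    public static void main(String[] args) {
        System.out.println(ways(100, 0)); // ways to change one dollar
    }
}
```

For one dollar with pennies, nickels, dimes, and quarters this counts 242 ways.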
calling method - Java Beginners
calling method class A{
public void fo(){
class B{
public void fo1(){
System.out.println("fo1 of class B... static void main(String args[ ]){
}
}
I Want to call method fo1
clone method - Java Beginners
clone method I want to copy a reference which the class name is B and the variable name is A. I used clone method.After I have the class name C which is same with b class.
C=(A) B.cloneObject(); I make a clone method
main method
main method Why is the java main method static
java Method Error - Java Beginners
java Method Error class mathoperation
{
static int add(int...
mathdemo.java:7: missing method body, or declare abstract
static int mul(int x,int y);
^
mathdemo.java:9: return outside method
boolean method - Java Beginners
don't know how to throw in the boolean method in my main class.
Notebook...;
}
Below is main class method:
String itemNo = JOptionPane.showInputDialog
When is java main method called?
When is java main method called? When is java main method called? Please explain the main method in Java with the help of code.
In a java class, main(..) method is the first method called by java environment when
Static method
Static method what is a static method?
Have a look at the following link:
Java Static Method
what is meant by method signature in java.......
what is meant by method signature in java....... what is meant by method signature in java.......
Hi Friend,
The signature of a method is the combination of the method's name along with the number and types
Static Method in java with realtime Example
Static Method in java with realtime Example could you please make me clear with Static Method in java with real-time Example
createCustomer method in Java - Java Beginners
createCustomer method in Java I'm just trying to add a createCustomers() to an existing code. I have all of my GUIs and it compiles, but I'm kind of stuck here. My code is as follows:
import java.awt.*;
import
Clone method example in Java
Clone method example in Java programming
language
Given example of java clone() method illustrates, how to
use clone() method. The Cloneable
Java Example Update Method - Java Beginners
Java Example Update Method I wants simple java example for overriding update method in applet .
please give me that example
Method in Java
In this section, we will explore the concept of method
in the reference of object... the type of value returned from
a method. It can be a valid Java type
Java method Overriding
Below example illustrates method Overriding in java. Method overriding in java means a subclass method overriding a super class method. Superclass
Java writing method - Java Beginners
Java writing method HI! I'll really appreciate it if anyone can help me...! Implement a method named posNeg that accepts 3 integer parameters..., 19 then the method will return 1. Implement a method named order
Use of isReadOnly() method in java.
In this tutorial, we will see how to use of isReadOnly method in java.
IntBuffer API : The java.nio.IntBuffer class... type
Method
Description
static IntBuffer
allocate draw triangle draw method?
Java draw triangle draw method? hi
how would i construct the draw method for an triangle using the 'public void draw (graphics g ) method? im...; Here is an example that draws a triangle using drawPolygon method.
import
Use of rewind method in java.
In this tutorial, we will explain the use of
rewind() method of Buffer class. The ByteBuffer ...;In this example, The rewind method reset the position of buffer at zero.
So you can
Example of contains method of hashset in java.
The contains() is a method of hashset. It is used for checking that the
given number is available..., it returns true otherwise false.
Code:
HashSetToArray .java
Java hashmap clear() method example.
This tutorial is based on clear() method of java HashMap class.
It removes all values from HashMap.
Code: ...;EE");
/*Display Element before
clear method */
System.out.println
sleep method in thread java program
sleep method in thread java program How can we use sleep method... example ,we have used sleep method. we are passing some interval to the sleep method .After that interval thread will awake.
public class test
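The snippet above is cut off; here is a self-contained, hedged sketch of Thread.sleep() (names invented for illustration):

```java
public class SleepDemo {
    // Sleep for roughly `millis` ms and report how long we actually waited.
    public static long pause(long millis) throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread.sleep(millis); // suspends the current thread for about `millis` ms
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("slept about " + pause(100) + " ms");
    }
}
```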
Method with Varargs
Method with Varargs How to use a varargs method in java?
The given example illustrates the use of varargs. Varargs represents variable-length arguments in methods; it has the (Object... arguments) form. In the given
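A minimal hedged varargs sketch (names invented for illustration):

```java
public class VarargsDemo {
    // A varargs parameter accepts zero or more int arguments.
    public static int sum(int... nums) {
        int total = 0;
        for (int n : nums) {
            total += n;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum());        // 0
        System.out.println(sum(1, 2, 3)); // 6
    }
}
```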
Input / Output in Easiest Method in Java
Input / Output in Easiest Method in Java How to input from keyboard, both strings and characters and display the result on the screen with the easiest method in Java?
Hi Friend,
Try the following code:
import
Java Thread : run() method
In this section we are going to describe run() method with example in java thread.
Thread run() :
All the execution code is written with in the run() method. This method is
part of the Runnable interface
Java: Method Exercises 2
Name: _________________________________
What... September
// Remember:
// * If a method is overloaded (more than one method....
// * Actual parameters (in the call) are evaluated before the method
Java: Method Exercises 4
Name: _________________________________
What is the output from this program?
___________________________
___________________________
___________________________
___________________________
___________________________
1
Java clone method example
Java clone method is used to present the duplication of object in the Java
programming language. The Java objects are manipulated
dynamic method dispatch - Java Beginners
dynamic method dispatch can you give a good example for dynamic method dispatch (run time polymorphism)
Hi Friend,
In dynamic method dispatch,super class refers to subclass object and implements method overriding
Java: Method Exercises 1
Java NotesMethod Exercises 1
Name: _________________________________
What is the output from this program.... ", computeSomething(100, 100));
}
/** Displays an integer by calling another method
Java: Method Exercises 3
Name: _________________________________
What is the output from this program?
___________________________
___________________________
___________________________
___________________________
___________________________
1
Java: Method Exercises 5
Java NotesMethod Exercises 5
Name: _________________________________
What is the output from this program... by calling another method, */
private static String convertInt(int i
Servlet service method - Java Beginners
and the other one is normal java class ,What i want to do is that i want to send path... java class) i want to get this path, i wrote the code for stand alone...;
return path1;
}
}
here is my normal java class
public class Two {
public
java method return type : - Java Beginners
java method return type : i have one question regarding methods,,, if we create a method with return type as class name (public employee addemp(int...
{
public static void classasmethod()
{
System.out.println("classasmethod method
Java Thread : toString() method
In this section we are going to describe toString() method with example in java thread.
toString() method :
If you want... and thread group.
Example :In this example we are using toString()
method
http://www.roseindia.net/tutorialhelp/comment/87572
Noble Paul commented on SOLR-1335:
----------------------------------
bq.Noble - why aren't system properties viable for this?
* Setting system properties is error prone. If we have a few dozen properties, setting -D for
each property is hard. The startup scripts are maintained by operations, whereas this properties
file should be delivered by the developers. This is more about a separation of concerns.
* System properties are global properties. We should not corrupt that namespace.
> load core properties from a properties file
> -------------------------------------------
>
> Key: SOLR-1335
> URL:
> Project: Solr
> Issue Type: New Feature
> Reporter: Noble Paul
> Assignee: Noble Paul
> Fix For: 1.4
>
> Attachments: SOLR-1335.patch, SOLR-1335.patch,.
http://mail-archives.apache.org/mod_mbox/lucene-solr-dev/200908.mbox/%3C1294279058.1250682135066.JavaMail.jira@brutus%3E
In order to set the current namespace, use an @namespace directive at the top level of your program:

@namespace "passwd"
BEGIN { … }
…

After this directive, all simple non-completely-uppercase identifiers are placed into the passwd namespace.
You can change the namespace multiple times within a single source file, although this is likely to become confusing if you do it too much.
NOTE: Association of unqualified identifiers to a namespace is handled while gawk parses your program, before it starts to run. There is no concept of a “current” namespace once your program starts executing. Be sure you understand this.
Each source file for -i and -f starts out with an implicit ‘@namespace "awk"’. Similarly, each chunk of command-line code supplied with -e has such an implicit initial statement (see section Command-Line Options).
Files included with @include (see section Including Other Files into Your Program) “push” and “pop” the current namespace. That is, each @include saves the current namespace and starts over with an implicit ‘@namespace "awk"’, which remains in effect until an explicit @namespace directive is seen. When gawk finishes processing the included file, the saved namespace is restored and processing continues where it left off in the original file.
The use of @namespace has no influence upon the order of execution of BEGIN, BEGINFILE, END, and ENDFILE rules.
https://www.gnu.org/software/gawk/manual/html_node/Changing-The-Namespace.html
A customer was having trouble with their shell namespace
extension:
When we click the [+] button next to our shell namespace extension
in the folder tree view,
the tree view shows both files and folders,
even though it's supposed to show only folders.
Our IShellFolder::GetAttributesOf
does return the
correct values for
SFGAO_FOLDER (including it
for the folders and omitting it for the non-folders).
What are we doing wrong?
The tree view enumerates the children of a folder by
calling IShellFolder::EnumObjects
and passing the SHCONTF_FOLDERS flag
while omitting the SHCONTF_NONFOLDERS flag.
This means that it is only interested in enumerating
child folders.
Child non-folders should be excluded from the enumeration.
It so happens that the customer's shell namespace extension
was not respecting the SHCONTF_FOLDERS and
SHCONTF_NONFOLDERS flags;
it always enumerated all objects regardless of what the
caller requested.
Fixing the enumerator fixed the problem.
bhiggins asks about the mysterious function EnumClaw that existed in some versions of the Win32 documentation.
I went digging through the MSDN archives and was close to giving up
and declaring the cause lost,
but then I found it: A copy
of the EnumClaw documentation.
EnumClaw
The EnumClaw function returns the child or the parent
of the window whose HWND is passed in.
HWND EnumClaw(
HWND hwndParent // handle to parent window
);
Parameters
hwndParent
[in] Handle to the parent window.
Return Values.
Requirements
Windows NT/2000/XP:
Included in Windows XP and Windows .NET Server.
Windows 95/98/Me: Unsupported.
Header: Declared in Winuser.h; include Windows.h.
Library: Use User32.lib.
See Also
Windows Overview,
Window Functions.
There was never a function called EnumClaw.
This was a joke inserted by the documentation folks,
a pun on the Washington city named
Enumclaw.
(The state of Washington has a lot of place names which come
from Native American words.
Other examples are
Sequim,
Puyallup,
and
Tulalip.
At least Enumclaw is pronounced almost like it's spelled.)
http://blogs.msdn.com/b/oldnewthing/archive/2010/04.aspx?PageIndex=3
Jasmine utilities and helpers for testing React components.
Turn a jasmine Suite into one suitable for testing React components.
The following features will become available to your suite:
In your global suite context, "subject" will contain a reference to the mounted instance of the component you're testing, created and mounted during each test in your suite.
All the DOM helpers detailed below are injected into the global context throughout your tests, so you can conveniently use things like find() and click().
Note
Most of these helpers operate on nodes inside the node of the component being tested, so when you do find('.foo') it will look for a child that has the foo class only inside the mounted component instance.
Locate a single element using a jQuery-compatible selector. Returns an HTMLElement if found.
Like find() but returns a set of all elements that match the selector.
Simulate a mouse-click. If a selector was passed in and it did not yield an element, an error will be thrown.
Toggle the checked state of a radio button or a checkbox. You can explicitly set the state by passing true or false as the second parameter.
Choose an option from a <select /> dropdown menu. The value parameter should map to the <option /> element you want to choose.
There is currently no support for multiple-option dropdowns.
Change a text-input field's value. This does not simulate typing; the typeIn helper does that instead.
Simulate typing into a text field. Typing in will change the node's value and emit the change event, along with a keydown event for every character in the text.
WARNING!!!
This requires jasmine.promiseSuite to be available from jasmine_rsvp.
Update the component props. Returns an RSVP.Promise that fulfills when the component has been re-rendered with the new props.
Similar to #setProps() but for the component's internal state, which generally you should avoid touching.
toExist()
Shortcut for testing that a certain element exists in the DOM.
toSendAction(nameOrOptions)
WARNING!!!
This requires jasmine.promiseSuite to be available from jasmine_rsvp.
This is very opinionated and most likely won't apply to your code, but it does to mine so it's here. My components are glued to only "emit" actions when there's any sort of processing required. This allows me to decouple the UI (components) from actual domain logic handling, and consequently, it makes testing component interaction pretty slick.
To test if a component is sending actions correctly, a custom matcher is exposed to your suite, called toSendAction(). The matcher accepts either a string, which would be the event such as user:signup or account:save, or a more descriptive object (see below.)
Example 2: verifying the correct parameters are being sent.
We can also test the parameters the component is sending. Let's assume we have a
Preferences component that sends the
updatePreferences action
with the user's chosen preferences.
var Preferences = React.createClass({
  render: function() {
    return (
      <form>
        <select ref="weekday">
          <option value="friday">Friday</option>
          <option value="saturday">Saturday</option>
          <option value="sunday">Sunday</option>
        </select>
        <button id="save">Save</button>
      </form>
    );
  },
  onClick: function() {
    this.sendAction('updatePreferences', { weekday: this.refs.weekday.value });
  }
});
If the user chooses Friday to be their starting day of the week, the component should emit the right action with that weekday:
MIT
https://www.npmjs.com/package/jasmine_react
Program to print the minimum steps to reach 0 in C++
In this tutorial, we will learn how to print the minimum steps to reach 0 or zero in C++.
The program accepts the number ‘N’ and any integer ‘K’.
The program prints the minimum number of steps required to reach 0 from N based on certain steps
If N is divisible by K then divide N by K
If N is not divisible by K then decrement N by 1.
How to find the minimum steps to reach zero?
If the number N is 29 and K is 3, the program checks whether N is divisible by K; if so, N is divided by K.
If it is not divisible, N is decremented by 1.
For N=29 the steps are:
29 -> 28 -> 27 -> 9 -> 3 -> 1 -> 0
Therefore the minimum steps to reach zero is 6 hence it is printed as output.
INPUT:
#include <iostream>
#include <bits/stdc++.h>
using namespace std;

unsigned long long int n, k;
cout << "Enter the N value :" << "\t";
scanf("%llu", &n);
cout << "Enter the K value :" << "\t";
scanf("%llu", &k);
In the above code,
The values are obtained using scanf(). In C++, scanf() is available once <cstdio> is included (here it comes in via the #include <bits/stdc++.h> header).
For large integer values, with N and K up to 10^18, we can use the unsigned long long int data type.
INPUT 1:
Enter the N value : 29 Enter the K value : 3
C++ code to print the minimum steps to reach 0 :
#include <iostream>
#include <bits/stdc++.h>
using namespace std;

int main() {
    unsigned long long int n, k, cnt = 1, flag;
    cout << "Enter the N value:" << "\t";
    scanf("%llu", &n);
    flag = n;
    cout << "\nEnter the K value:" << "\t";
    scanf("%llu", &k);
    while (n != 0) {
        unsigned long long int quo, rem;
        quo = n / k;
        rem = n % k;
        if (n >= 2) {
            if (rem != 0) {
                n = n - rem;
                if (n != 0) {
                    cnt += rem;
                } else {
                    cnt += (rem - 1);
                }
            } else {
                n = n / k;
                cnt++;
            }
        } else if (n == 0) {
            break;
        } else {
            n /= k;
            if (n != 0) {
                cnt += rem;
            }
        }
    }
    cout << "\nThe minimum steps from " << flag << " to reach zero : " << cnt;
}
In the above code,
The numbers N and K are obtained as input. The quotient and remainder are stored in separate variables.
The count variable cnt is initialized to 1; it counts the number of steps needed to reach 0.
If n is greater than or equal to 2 and the remainder is not equal to 0, the remainder is subtracted from n.
n is then checked: if it is not yet 0, the count is incremented by rem; otherwise rem - 1 is added to the count.
If the remainder is equal to zero, the integer is divided by k to move towards 0 and the count is incremented by 1.
The else-if statement checks whether n is equal to 0; if so, the loop ends.
Otherwise the integer is divided by k and the count is incremented accordingly, after which the count is printed as output.
OUTPUT 1:
The minimum steps from 29 to reach zero : 6
INPUT 2:
Enter the N value : 98765432346789 Enter the K value : 9876543467898
OUTPUT 2:
The minimum steps from 98765432346789 to reach zero : 9876541135717
Thus the program prints the minimum steps to reach 0.
We hope this tutorial helped you to understand how to print the count of steps to reach zero in C++.
Also learn:
https://www.codespeedy.com/cpp-program-to-print-the-minimum-steps-to-reach-0/
Type Classes With An Easier Example
I recently wrote this post. It contained a lot of rather obtuse mathematics, just to introduce what was basically the example problem for the article. That’s because it was a real honest-to-goodness problem I was solving and writing papers on for algebra journals… admittedly, not the best choice for a blog aimed partly at non-mathematicians. Here, I do something similar, but with a much simpler and more general purpose problem.
Overview
We’ll be playing with converting matrices to an upper triangular form, essentially using Gaussian elimination. As a reminder, here’s what that means (I’m simplifying a little):
Goal: I have a matrix, and I’d like it to be upper triangular. (Upper triangular means that everything below the main diagonal is zero.)
Rules: I’ll allow myself to do these things to the matrix: swap any two rows or columns, or add any multiple of a row or column to another one.
Strategy: I’ll look for a non-zero element in the first column. If there is one, I’ll swap rows to move it to the first row of the matrix. Then I’ll add multiples of the first row to all of the other rows, until the entire rest of the first column is equal to zero. Then (here’s where we get a little tricky) I can just do the same thing on the rest of the matrix, ignoring the first row and column. In other words, I’ve reduced the problem to a smaller version of itself. (Yep, it’s a recursive algorithm.)
The Trick Up Our Sleeve: Instead of writing our function to operate directly on some representation of matrices, we’ll make it work on a type class. This will let us play all sorts of cool tricks.
Preliminary Stuff
I’d like to be able to declare instance for my matrices without jumping through newtype hoops, so I’ll start with a language extension.
{-# LANGUAGE TypeSynonymInstances #-}
Imports:
import Data.Maybe
Next, I need a few easy utility functions on lists. There’s nothing terribly interesting here; just a swap function, and a function to apply a function to the nth element of a list.
swap :: Int -> Int -> [a] -> [a] swap i j xs | i == j = xs | i > j = swap j i xs | otherwise = swap' i j xs where swap' 0 j (x:xs) = let (b,xs') = swap'' x (j-1) xs in b : xs' swap' i j (x:xs) = x : swap' (i-1) (j-1) xs swap'' a 0 (x:xs) = (x, a:xs) swap'' a j (x:xs) = let (b,xs') = swap'' a (j-1) xs in (b, x:xs') modifynth :: Int -> (a -> a) -> [a] -> [a] modifynth _ _ [] = [] modifynth 0 f (x:xs) = f x : xs modifynth n f (x:xs) = x : modifynth (n-1) f xs
Finally, I need a type to represent matrices. Since this is just toy code where I don’t need really high performance, a list of lists will do just fine.
type Matrix = [[Double]]
All done. On to the interesting stuff.
Building a Type Class
I already mentioned that I don’t want to operate directly on the representation of a matrix as a list of lists. Instead, I’ll declare a type class capturing all of the operations that I’d like to be able to perform.
class Eliminable a where (@@) :: a -> (Int,Int) -> Double size :: a -> Int swapRows :: Int -> Int -> a -> a swapCols :: Int -> Int -> a -> a addRow :: Double -> Int -> Int -> a -> a addCol :: Double -> Int -> Int -> a -> a
I’ve reserved the operator @@ to examine an entry of a matrix, size to give me its size, and then included functions to swap rows, swap columns, add a multiple of one row to another, and add a multiple of one column to another. Now I just need an implementation for the concrete matrix type I declared earlier.
instance Eliminable Matrix where m @@ (i,j) = m !! i !! j size m = length m swapRows p q m = swap p q m swapCols p q m = map (swap p q) m addRow k p q m = modifynth q (zipWith comb (m!!p)) m where comb a b = k*a + b addCol k p q m = map (\row -> modifynth q (comb (row!!p)) row) m where comb a b = k*a + b
Done.
Programming With Our Type Class
Recall that the substantial portion of the elimination algorithm earlier was to zero out most of the first column of the matrix, leaving only the top element possibly non-zero. We’re now in a position to implement this piece of the algorithm. It’s not all that tricky.
zeroCol :: Eliminable a => a -> a zeroCol m = let clearRow j m' = addRow (-(m' @@ (j,0) / m' @@ (0,0))) 0 j m' clearCol m' = foldr clearRow m' [1 .. size m - 1] in case listToMaybe [ i | i <- [0 .. size m - 1], m @@ (i,0) /= 0 ] of Nothing -> m Just 0 -> clearCol m Just i -> clearCol (swapRows 0 i m)
This function looks only at the first column of the matrix, and clears it out my moving a non-zero element to the top, and then adding the right multiple of that column to all those below it. The important thing to notice is that this was implemented for any arbitrary instance of the type class I called “Eliminable”. This will be incredibly useful in the next few steps.
Using the Type Class
The next part of the algorithm is to ignore the first row and column, and perform the same operation on the submatrix obtained by deleting them. It’s actually a bit unclear how to implement this. We have a few options:
- Modify zeroCol above, to have it take a parameter representing the current column, and do everything relative to the current column. This is pretty messy. It actually might not be too messy in this case, but if the algorithm I were implementing were a little less trivial to begin with, it could definitely get quite messy.
- Actually perform the elimination on a separate matrix, and then somehow graft the first row and column from this matrix onto that one. Again, this could get pretty messy in general.
- Change the representation of the submatrices.
I’ll choose the third. Luckily, this isn’t too tough, since we have a type class. I’ll just define a newtype, and a new instance, to encapsulate the idea of a matrix with the first row and column deleted.
newtype SubMatrix a = SubMatrix { unwrap :: a } instance Eliminable a => Eliminable (SubMatrix a) where (SubMatrix m) @@ (i,j) = m @@ (i+1,j+1) size (SubMatrix m) = size m - 1 swapRows p q (SubMatrix m) = SubMatrix (swapRows (p+1) (q+1) m) swapCols p q (SubMatrix m) = SubMatrix (swapCols (p+1) (q+1) m) addRow k p q (SubMatrix m) = SubMatrix (addRow k (p+1) (q+1) m) addCol k p q (SubMatrix m) = SubMatrix (addCol k (p+1) (q+1) m)
Using this new instance, I can easily complete the elimination algorithm.
eliminate :: Eliminable a => a -> a eliminate m | size m <= 1 = m | otherwise = unwrap . eliminate . SubMatrix . zeroCol $ m
Yep, that’s all there is to it, and we have a working elimination algorithm.
Type Class Games
Suppose, now, that I want a lower triangular matrix. It might initially seem that I’m out of luck; I need to write all this code again. That turns out not to be the case, though. If I just teach the existing code how to operate on the transpose of a matrix instead of the matrix I’ve given it, then all is well! Here goes.
newtype Transposed a = Transposed { untranspose :: a } instance Eliminable a => Eliminable (Transposed a) where (Transposed m) @@ (i,j) = m @@ (j,i) size (Transposed m) = size m swapRows p q (Transposed m) = Transposed (swapCols p q m) swapCols p q (Transposed m) = Transposed (swapRows p q m) addRow k p q (Transposed m) = Transposed (addCol k p q m) addCol k p q (Transposed m) = Transposed (addRow k p q m)
To implement the lower triangular conversion, now, is simple.
lowerTriang :: Eliminable a => a -> a lowerTriang = untranspose . eliminate . Transposed
Any number of changes to the operation we’re trying to perform can often be expressed by simply substituting a different representation for the type on which we’re performing the operation. (Thinking about this fact can actually get pretty deep.)
Side Calculations
There’s a fairly common problem that many people run into when moving from an imperative language to a functional one. This can apply to learning functional programming, converting existing imperative code, or even just translating the concepts in one’s mind when talking to someone who thinks imperatively. The problem goes something like this: you have some code that performs some computation, and now you want to change the code to add some new concept to the existing computation. Often, the new idea you’re trying to add could be performed trivially in an imperative language, by adding print statements to some function seven layers in, or by keeping track of some value in a global variable, or in some field of some object. In the functional setting, these aren’t available to you.
The minimalist answer is simply to add all the plumbing code; new parameters and return values, etc. to every function in the entire call tree. To say the least, this is unappealing! To a new Haskell programmer, the obvious answer often looks like monads. However, again, the entire call tree has to be rewritten in a monadic style, and besides, this is a tad like using a rocket launcher to rid the house of mice.
The solution I propose here is that many times, it’s sufficient to use a type class. Here’s an example.
Problem: Calculate the determinant of a matrix efficiently.
Determinants can be calculated in a lot of different ways, but one of the most common uses elimination. The interesting fact here is that once you’ve got a triangular matrix (lower or upper; doesn’t matter), then its determinant is just the product of its diagonal elements. Furthermore, we know precisely what happens to the determinant when you swap rows or columns (it flips sign, but the magnitude stays the same), or when you add a multiple of one row or column to the other (it stays the same). So a (very fast) way to calculate a determinant is to perform elimination, but also keep track, at each step, of what you’ve done to the determinant so far.
So now we need, not merely a matrix, but a pair consisting of a matrix and some side information – namely, which change we’ve made so far to the determinant.
data WithDeterminant a = WithDeterminant Double a instance Eliminable a => Eliminable (WithDeterminant a) where (WithDeterminant _ m) @@ (i,j) = m @@ (i,j) size (WithDeterminant _ m) = size m swapRows p q (WithDeterminant d m) = WithDeterminant (-d) (swapRows p q m) swapCols p q (WithDeterminant d m) = WithDeterminant (-d) (swapCols p q m) addRow k p q (WithDeterminant d m) = WithDeterminant d (addRow k p q m) addCol k p q (WithDeterminant d m) = WithDeterminant d (addCol k p q m)
As before, once we’ve defined the appropriate instance, the implementation is actually quite easy.
diags :: Matrix -> [Double] diags [] = [] diags (r:rs) = head r : diags (map tail rs) determinant :: Matrix -> Double determinant m = let WithDeterminant d m' = eliminate (WithDeterminant 1 m) in d * product (diags m')
The resulting determinant function actually performs quite well. For example, calculation of the determinant of a 100 by 100 matrix is done in 1.61 seconds, fully interpreted in GHCi. I didn’t bother compiling with optimization to see how well that does, nor replacing the inefficient list-of-lists representation of matrices with one based on contiguous memory or arrays. (Edit: Compiled and optimized with GHC, but still using the list-of-list representation, the time is around a third of a second.)
(It’s worth pointing out that automatic differentiation is another very impressive example of this same technique, except that it uses the standard numeric type classes instead of a custom type class.)
Conclusion
The point of this article is that plain old type classes in Haskell can be used to make your purely functional code very flexible and versatile. By defining a type class to capture the concept of a set of related operations, I was able to achieve:
- Choice of data structures. Had I wanted to use a contiguous array instead of a list of lists, I could have easily done so.
- Easier programming. For example, operating on a submatrix of the original matrix became much easier.
- Flexible code. I was able to get lower triangular matrices, too, without rewriting the code.
- Better composability. I easily reused my upper triangular matrix calculation to find determinants, even though additional calculations had to be threaded through the original code.
- Separation of concerns. When I started, I never even dreamed that I might need to trace determinant calculations through the process. That got added later on, in its own separate bit of code. If someone else wanted different plumbing… say, for logging, or precondition checking, or estimating the possible rounding error, all they’d need to do is define a new instance of the type class.
None of this is new, of course. But type classes are definitely an underused language feature by many Haskell programmers.
A side note: If you compile this code and generate a random 100 x 100 matrix with entries between 0 and 1, as I did, you’ll find the determinant is somewhere in the range of 10 to the 25th power! At first, I thought this was an error. It’s actually correct, though. Think of it this way: the determinant of a 100 x 100 matrix is the sum of 100! (that’s 100 factorial) signed elementary products. Half are positive, and the other half are negative. If you look at the expected value of X^100, where X is a uniformly distributed random variable over (0,1), you get something like 1e-40, a really small number. But, half of 100 factorial is about 5e+157, a really, really big number. Their product is still very large: about 5e+117. The actual determinant is the difference between two such numbers, and its expected value (even simplifying as I am with some entirely incorrect assumptions of independence) depends on the variance as well, but it’s indeed quite easy to see how the difference between two numbers this large can be so large; in fact, it’s surprising that it isn’t larger.
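(A quick check of my own, not from the original post.) The "about 5e+157" figure for half of 100 factorial is easy to verify with logarithms, since 100! itself overflows a double:

```javascript
// Verify the magnitude of 100!/2 using base-10 logarithms.
let log10Factorial = 0;
for (let i = 2; i <= 100; i++) log10Factorial += Math.log10(i);
const log10Half = log10Factorial - Math.log10(2);
console.log(log10Factorial.toFixed(2)); // ≈ 157.97, i.e. 100! ≈ 9.3e+157
console.log(log10Half.toFixed(2));      // ≈ 157.67, i.e. 100!/2 ≈ 4.7e+157
```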
Side note. I wonder, what is the most idiomatic code to generate a matrix of a given size in Haskell…
There is another underused feature you are using, specifically in the definition of eliminate. A hint, if necessary: try commenting out the type signature of eliminate.
Your post impressed me so much that I’ve reimplemented/copied your code, with some niceties like Type Families :)
Wow, that’s very nice. Thank you! The type synonym for elements works out well.
I made a couple changes to your paste; hope you don’t mind. I fixed the transposed instance so that your makeBottomLeftDiagonal works. I also avoided a (swap 0 0) in one place — you either needed to do that, or else change the WithDeterminant instance so it doesn’t negate the determinant when you swap a row or column with itself.
makeDiagonal is still not guaranteed to work for singular matrices, but will work for matrices of full rank. For a counterexample, consider (makeDiagonal $ ListMatrix [[0,0],[2,3]])
Oh, you’re right, great! Maybe there is a place for this code on Hackage?
makeDiagonal was supposed to solve systems of linear equations. Something like:
solve = diagonal . makeDiagonal
Victor, go ahead. All of the code I post on my blog is in the public domain.
That was gorgeous. I hope you do more like this.
|
https://cdsmith.wordpress.com/2009/09/20/side-computations-via-type-classes/
|
CC-MAIN-2015-32
|
refinedweb
| 2,679
| 59.13
|
Tell us what’s happening:
I’m doing the beta challenges for react/redux. I’m stuck on this last part. I will copy the directions and highlight the sentence I’m having trouble with:
React and Redux: Extract Local State into Redux
You’re almost done! Recall that you wrote all the Redux code so that Redux could control the state management of your React messages app. Now that Redux is connected, you need to extract the state management out of the Presentational component and into Redux. Currently, you have Redux connected, but you are handling the state locally within the Presentational component.
In the Presentational component, first, remove the messages property in the local state. These messages will be managed by Redux. Next, modify the submitMessage() method so that it dispatches submitNewMessage() from this.props, and pass in the current message input from local state as an argument. Because you removed messages from local state, remove the messages property from the call to this.setState() here as well. Finally, modify the render() method so that it maps over the messages received from props rather than state.
Once these changes are made, the app will continue to function the same, except Redux manages the state. This example also illustrates how a component may have local state: your component still tracks user input locally in its own state. You can see how Redux provides a useful state management framework on top of React. You achieved the same result using only React’s local state at first, and this is usually possible with simple apps. However, as your apps become larger and more complex, so does your state management, and this is the problem Redux solves.
What exactly does this mean I should do? I believe I have a pretty good idea of what it wants, but knowing what syntax to use is the hard part, I think. Any help?
Your code so far
    // Redux:
    const ADD = 'ADD';

    const addMessage = (message) => {
      return {
        type: ADD,
        message: message
      }
    };

    const messageReducer = (state = [], action) => {
      switch (action.type) {
        case ADD:
          return [
            ...state,
            action.message
          ];
        default:
          return state;
      }
    };

    const store = Redux.createStore(messageReducer);

    // React:
    const Provider = ReactRedux.Provider;
    const connect = ReactRedux.connect;

    // Change code below this line
    class Presentational extends React.Component {
      constructor(props) {
        super(props);
        this.state = {
          input: ''
        }
        this.handleChange = this.handleChange.bind(this);
        this.submitMessage = this.submitMessage.bind(this);
      }
      handleChange(event) {
        this.setState({
          input: event.target.value
        });
      }
      submitMessage() {
        this.setState({
          input: ''
        });
      }
      render() {
        return (
          <div>
            <h2>Type in a new Message:</h2>
            <input value={this.state.input} onChange={this.handleChange}/><br/>
            <button onClick={this.submitMessage}>Submit</button>
            <ul>
              {this.props.messages.map( (message, idx) => {
                  return (
                    <li key={idx}>{message}</li>
                  )
                })
              }
            </ul>
          </div>
        );
      }
    };
    // Change code above this line

    const mapStateToProps = (state) => {
      return { messages: state }
    };

    const mapDispatchToProps = (dispatch) => {
      return {
        submitNewMessage: (message) => {
          dispatch(addMessage(message))
        }
      }
    };

    const Container = connect(mapStateToProps, mapDispatchToProps)(Presentational);

    class AppWrapper extends React.Component {
      render() {
        return (
          <Provider store={store}>
            <Container/>
          </Provider>
        );
      }
    };
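(Editor's note, not part of the original post.) The step the directions describe can be illustrated without React or Redux at all. The sketch below is my own framework-free simulation: a tiny stand-in for `Redux.createStore`, a `props` object playing the role of what `mapStateToProps`/`mapDispatchToProps` wire up, and `localState` playing the role of the component's `this.state`. The key change is in `submitMessage()`: dispatch `submitNewMessage` from props with the current input, then clear only the local input, since messages now live in Redux.

```javascript
// Framework-free simulation of the challenge's data flow.
const ADD = 'ADD';
const addMessage = (message) => ({ type: ADD, message: message });

const messageReducer = (state = [], action) => {
  switch (action.type) {
    case ADD:
      return [...state, action.message];
    default:
      return state;
  }
};

// A tiny stand-in for Redux.createStore:
function createStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' });
  return {
    getState: () => state,
    dispatch: (action) => { state = reducer(state, action); }
  };
}

const store = createStore(messageReducer);

// What connect(mapStateToProps, mapDispatchToProps) would hand the component:
const props = {
  submitNewMessage: (message) => store.dispatch(addMessage(message)),
  get messages() { return store.getState(); }
};

// The reworked submitMessage(): dispatch from props, then clear local input.
let localState = { input: 'hello' };
function submitMessage() {
  props.submitNewMessage(localState.input);
  localState = { input: '' };
}

submitMessage();
console.log(props.messages); // logs [ 'hello' ]
```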
Your browser information:
User Agent is:
Mozilla/5.0 (X11; CrOS x86_64 10323.62.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3325.184 Safari/537.36.
Link to the challenge:
https://forum.freecodecamp.org/t/extract-local-state-into-redux/182622
Never Should You Ever In Kubernetes Part 5: Your Top 4 FAQs
Written By: Danielle Cook
As we’ve discussed in this series, there are some things that you should simply never, ever do in Kubernetes. Corey Quinn, founder of Screaming in the Cloud and Chief Cloud Economist at The Duckbill Group, Kendall Miller, President of Fairwinds, and Stevie Caldwell, Technical Lead (CRE Team) at Fairwinds had a great conversation about some of the basic mistakes they’ve seen, as well as some of the most common Kubernetes security, reliability, and efficiency problems they’ve encountered over the last several years helping customers deploy Kubernetes. During the webinar we had a number of questions from the attendees, which we’re happy to share with you. Let’s get to your top four FAQs about what development and operations teams should never ever do in Kubernetes if they want to get the most out of the leading container orchestrator.
1. Could (or should) we run combined worker and control plane nodes?
The answer is a hard no. It’s better to scale down the size of the nodes for the control plane and call it good. This goes back to the “don’t run workloads on your control plane” piece. There’s a reason that the best practice for this is to keep them separate. Usually these sorts of things are spurred on by cost concerns, but there are other better and safer ways to run a cluster and control your costs than trying to combine your nodes all into one. You’re already taking a middle ground by having your etcd cluster co-located with the control plane. (Some Kubernetes professionals say that you should run etcd externally from the control plane.) If you decrease the size of your control plane nodes, you can right-size resources for the worker nodes so that you’re not using too much, and you should make use of autoscaling to scale down your cluster. There are all sorts of things that you can do to be economical about running Kubernetes that aren’t unsafe, but don’t run combined worker and control plane nodes.
2. Should I run a database in Kube?
The data store should be behind an API, and what is powering that data store is really an implementation question. If you’re going to run your database on Kubernetes, you need to have a plan for volumes, persistence, backups, restores, and so on. While the idea isn’t actively ridiculous, you do need to architect for it. Consider what happens if the container your database is running in goes away and doesn’t get replaced for a little while.
If there are managed solutions offered within the environment that you’re running in, it’s worth looking at those. For example, if you’re running your workload on AWS, using their data store offerings, such as Aurora or RDS is a lot easier than rolling your own. It’s going to add a lot of complexity, so why do it if you don’t have to? One of the premises that Kubernetes was built upon is the idea that all the data is ephemeral. That’s why things move around as freely as they do, and why it doesn’t matter if you lose a node or two. But when we start getting into persistent data issues, as you do with databases, then you have to plan accordingly:
- Where are you storing the data?
- What happens when the pod that’s running your database needs to go somewhere else?
- Does the pod then get scheduled on a node that’s in another availability zone?
- If it is moved to another availability zone, is the volume that it was writing to no longer available to it?
- Do you have to set up persistent volumes and set up region affinity or node affinity?
When you run a database in Kube, it becomes very complex. So you need to think about what gain you’re going to get out of doing so. It might increase flexibility for some things, but for other things it doesn’t make sense, so consider your use case carefully, because it will add a lot of complexity to run your database in Kubernetes.
3. Is there a solution to K8s billing with cost breakdown?
For example, if you have one namespace for each client and want to calculate the monthly cost for each namespace, how do you do that? We have a tool, Fairwinds Insights, that does relative daily cost and breaks it down by service. It also tells you which namespace those things are in and helps you optimize your resource requests, identifying what you’re requesting versus what you could be requesting. Fairwinds Insights also provides data that shows you how much you’re going to save over time.
There are other tools out there that do something similar. CloudZero, for example, is a cloud cost intelligence platform. Kubecost focuses specifically on the costs of Kubernetes. There’s another company that VMware acquired, CloudHealth, which provides some capabilities around optimizing cost and usage across cloud environments. Cloudability from Apptio helps teams optimize cloud resources for speed, cost, and quality.
People who care about Kubernetes workloads typically have two questions that are hard to find good answers for. One is tracking shared resources; for things that have to exist, how do you allocate and attribute those? The other question is related to data transfer, and there is no great tooling option available today. For example, if you are using AWS and you have two services talking to each other, that data transfer is either free, or if they’re in two availability zones for durability purposes, it costs two cents (2¢) per gigabyte. When we’re talking about petabyte scale workloads, that’s a lot of money.
4. Is it possible to use Fairwinds Insights on prem?
The short answer is yes — we now have a self-hosted version of Fairwinds Insights. Fairwinds Insights ingests data from Polaris, as well as nearly a dozen other Kubernetes audits (such as Trivy, Goldilocks, and kube-bench), and puts all the results in a single pane of glass. Since launching our SaaS offering, we found that some Polaris users had concerns about shipping data off to a third-party, especially enterprises in data-sensitive industries such as healthcare and finance. To address these concerns, we’ve worked hard for the past few months to build a version of Insights that can run entirely within the customer’s environment. We recently posted an article walking through our new self-hosted Fairwinds Insights, which explains how it works and some of the pros and cons for using an on prem version. If you’re interested in looking at Fairwinds Insights, check out the Fairwinds Insights demo.
Watch the entirely entertaining webinar on demand to learn what else you should never, ever do in Kubernetes.
https://fairwinds.medium.com/never-should-you-ever-in-kubernetes-part-5-your-top-4-faqs-d838d79d8d5e?source=post_internal_links---------6----------------------------
List of all Keywords in C Language
Description of all Keywords in C
auto
The auto keyword declares automatic variables. For example:
auto int var1;
This statement suggests that var1 is a variable of storage class auto and type int.
Variables declared within function bodies are automatic by default. They are recreated each time a function is executed.
Since automatic variables are local to a function, they are also called local variables. To learn more, visit C storage class.
break and continue
The break statement makes the program jump out of the innermost enclosing loop (while, do...while, for) or switch statement.
The continue statement skips the remaining statements inside the loop and moves on to the next iteration.
for (i = 1; i <= 10; ++i) {
    if (i == 3) continue;
    if (i == 7) break;
    printf("%d ", i);
}
Output
1 2 4 5 6
When i is equal to 3, the continue statement takes effect and skips 3. When i is equal to 7, the break statement takes effect and terminates the for loop. To learn more, visit C break and continue statement.
switch, case and default
The switch, case and default keywords are used to execute one block of statements among many alternatives. For example:
switch (expression) {
    case '1':
        // some statements to execute when 1
        break;
    case '5':
        // some statements to execute when 5
        break;
    default:
        // some statements to execute by default
}
Visit C switch statement to learn more.
char
The char keyword declares a character variable. For example:
char alphabet;
Here, alphabet is a character type variable.
To learn more, visit C data types.
const
An identifier can be declared constant by using const keyword.
const int a = 5;
To learn more, visit C variables and constants.
do...while
The do...while loop executes its body once before testing the condition. For example:
int i = 0;
do {
    printf("%d ", i);
    i++;
} while (i < 10);
To learn more, visit C do...while loop
double and float
Keywords double and float are used for declaring floating type variables. For example:
float number; double longNumber;
Here, number is single precision floating type variable whereas, longNumber is a double precision floating type variable.
To learn more, visit C data types.
if and else
In C programming, if and else are used to make decisions.
if (i == 1)
    printf("i is 1.");
else
    printf("i is not 1.");
If the value of i is other than 1, the output will be:
i is not 1
To learn more, visit C if...else statement.
enum
Enumeration types are declared in C programming using keyword enum. For example:
enum suit {
    hearts,
    spades,
    clubs,
    diamonds
};
Here, an enumerated type suit is created with the constants hearts, spades, clubs and diamonds.
To learn more, visit C enum.
extern
The extern keyword declares that a variable or a function has external linkage, meaning it is visible outside the file in which it is declared.
To learn more, visit C storage type.
for
There are three types of loops in C programming. The for loop is written in C programming using keyword for. For example:
for (i = 0; i < 9; ++i) {
    printf("%d ", i);
}
Output
0 1 2 3 4 5 6 7 8
To learn more, visit C for loop.
goto
The goto keyword is used for an unconditional jump to a labeled statement inside a function. For example:
for (i = 1; i <= 10; ++i) {
    if (i == 10)
        goto error;
}
printf("i is not 10");
error:
    printf("Error, count cannot be 10.");
Output
Error, count cannot be 10.
To learn more, visit C goto.
int
The int keyword declares integer type variable. For example:
int count;
Here, count is an integer variable.
To learn more, visit C data types.
short, long, signed and unsigned
The short, long, signed and unsigned keywords are type modifiers that alter the meaning of a base data type to yield a new type.
short int smallInteger;
long int bigInteger;
signed int normalInteger;
unsigned int positiveInteger;
return
The return keyword terminates the function and returns a value to the calling function.
int func() {
    int b = 5;
    return b;
}
This function func() returns 5 to the calling function. To learn more, visit C user-defined functions.
sizeof
The sizeof keyword evaluates the size, in bytes, of its operand (a variable or a constant).
#include <stdio.h>
int main() {
    printf("%zu bytes.", sizeof(char));
    return 0;
}
To learn more, visit C operators.
Output
1 bytes.
register
The register keyword suggests that a variable be stored in a CPU register, which may make access to it faster. For example:
register int var1;
static
The static keyword creates a static variable. The value of a static variable persists until the end of the program. For example:
static int var;
struct
The struct keyword is used for declaring a structure. A structure can hold variables of different types under a single name.
struct student {
    char name[80];
    float marks;
    int age;
} s1, s2;
To learn more, visit C structures.
typedef
The typedef keyword is used to explicitly associate a type with an identifier.
typedef float kg;
kg bear, tiger;
union
The union keyword is used for grouping variables of different types under a single name.
union student {
    char name[80];
    float marks;
    int age;
};
To learn more, visit C unions.
void
The void keyword indicates that a function doesn't return any value.
void testFunction(int a) {
    .....
}
Here, the function testFunction() cannot return a value because its return type is void.
volatile
The volatile keyword is used for creating volatile objects. A volatile object can be modified in an unspecified way by the hardware.
const volatile int number;
Here, number is a constant volatile object.
Since number is a constant, the program cannot change it. However, the hardware can change it, since it is a volatile object.
https://www.programiz.com/c-programming/list-all-keywords-c-language
I'm doing a file reading. The contents of the file are saved into a subject node called head. In order to perform dynamic node creation, I assigned temp = *head (on line 48). Upon printing, calling print(head) on line 34, the program crashed.
I debugged it and found a segmentation fault in printing, so I added lines 50 and 51 for manual debugging. The contents of ctemp can be printed, but not the original node *head. Clearly it is not updated.
What's the problem with line 48? Why can't the value of head be changed? What am I doing wrong?
*added txt && cpp attachments
Code:
#include <iostream>
#include <fstream>
#include <cstring>
#include <cstdlib>
using namespace std;

typedef struct cell {
    int row, bit;
    struct cell *next;
} cell;

typedef struct course {
    int col;
    char name[3];
    struct cell *rcell;
    struct course *next;
} course;

typedef struct subject {
    int col;
    char name[3];
    struct student *student;
    struct subject *next;
} subject;

typedef struct student {
    int index;
    char name[3];
    struct student *next;
} student;

void readfile(subject**);
void print(subject*);
void _free(subject**);

int main(void)
{
    subject *head = NULL;
    readfile(&head);
    print(head);
    _free(&head);
    free(head);
}

void readfile(subject **head)
{
    int column = 0, row = 0, ctotal = 0;
    char cname[3], sname[3], c;
    ifstream ifile("schedule.txt");
    subject *ctemp = NULL;
    student *stemp = NULL;
    if (ifile.is_open()) {
        ctemp = *head;
        while (ifile >> cname) {
            ctemp = new subject;
            strcpy(ctemp->name, cname);
            cout << "\nctemp " << ctemp->name << " ";
            cout << "\nhead " << (*head)->name << " ";
            ctemp->col = column++;
            stemp = ctemp->student;
            while (ifile >> sname) {
                c = ifile.get();
                if (c == '\n') break;
                stemp = new student;
                stemp->index = row++;
                strcpy(stemp->name, sname);
                cout << stemp->name << " ";
                stemp->next = NULL;
                stemp = stemp->next;
            }
            row = 0;
            ctemp->next = NULL;
            ctemp = ctemp->next;
            ctotal++;
        }
    } else {
        cout << "\nfile error";
    }
    ifile.close();
}

void print(subject *head)
{
    while (head) {
        cout << endl << head->name << " ";
        while (head->student) {
            cout << head->student->name << " ";
            head->student = head->student->next;
        }
        head = head->next;
    }
}

void _free(subject **head)
{
    subject *ctemp = NULL;
    student *stemp = NULL;
    while (*head) {
        ctemp = *head;
        while ((*head)->student) {
            stemp = ctemp->student;
            (*head)->student = (*head)->student->next;
            stemp->next = NULL;
            free(stemp);
        }
        *head = (*head)->next;
        ctemp->next = NULL;
        free(ctemp);
    }
}
http://cboard.cprogramming.com/cplusplus-programming/149970-struct-node-pointer-dereference-printing-bug-difficulty.html
|
Originally posted by Marc Peabody: As good OOAD folks would do, we're hoping to share this Item xsd definition across the two services.
The problem now is that if one service needs an extra field added to Item and we change the shared definition for Item, it would affect all dependent services because they all share the same xsd type. Should we allow that? Or should Item not be changed and we instead create a different ItemWithNewField or DeepItem or similar type to be used by only the service that needs it? If we go that route, would it make more sense to name the new version of type Item something else or simply give it a different namespace?
Originally posted by Remote Reference: The ideal situation will be to achieve a single canonical representation for the entity (within a distinct business domain) on which general consensus can be agreed to by multiple systems requesting that entity.
... In fact, when object-orientation became mainstream, having a common business object model (BOM) became a general goal. But, it turned out that this approach was a recipe for disaster for large systems. The first reason for the disaster was an organizational one: it was simply not possible to come to an agreement for harmonized types ... Either you didn't fulfill all interests, or your model became became far too complicated, or it simply was never finished. This is a perfect example of "analysis paralysis": if you try to achieve perfection when analyzing all requirements, you'll never finish the job. ... different systems enhance differently. Say you create a harmonized data type for customers. Later, a billing system might need two new customer attributes to deal with different tax rates, while a CRM system might introduce new forms of electronic addresses, and an offering system might need attributes to deal with privacy protection. If a customer data type is shared among all your systems (including systems not interested in any one of these extensions), all the systems will have to be updated accordingly to reflect each change, and the customer data type will become more and more complicated. Sooner or later, the price of harmonization becomes too high. Keeping all the systems in sync is simply too expensive in terms of time and money. ... Common BOMs do not scale because they lead to a coupling of systems that is too tight. As a consequence, you have to accept the fact that data types on large distributed systems will not be harmonized. In decoupled large systems, data types differ. ... if data types are not harmonized, you need data type mappings (which include technical and semantic aspects). Although mapping adds complexity, it is a good sign in a large systems because it demonstrates that components are decoupled. ... Note that a service consumer should avoid using the provider's data types in it's own source code. 
Instead, the consumer should have a thin mapping layer to map the provider's data types to its own data types. ... Having no common business data model has pros and cons: The advantage is that systems can modify their data types without directly affecting other systems (modified service interfaces affect only corresponding consumers). The drawback is that you have to map data types from one system to another. ... To promote loose coupling, fundamental data types harmonized across all services should usually be very basic. The most complicated common data type I've seen a phone company introduce in a SOA landscape was a data type for a phone number (a structure/record of country code, area code, and local number). The attempt to harmonize a common type of address (customer address, invoice addresses, etc.) failed {how to deal with titles of nobility? disparate constraints of different systems and tools to process and print addresses}. ... If you are able to harmonize, do it. Harmonization helps. However, don't fall into the trap of requiring that data types be harmonized. This approach doesn't scale. {To deal with differences between multiple related types (e.g. addresses) for consumers} ... introduce a composed service that allows you to query and modify addresses. The service then deals with the differences between the backend systems by mapping the data appropriately. ...
I would be very interested to know the various alternative ways how the web service stack can be made to tolerate the extra elements. I perceive this as a transformation layer between the canonical and the special message at the service end point for that specific client.
Originally posted by Marc Peabody: I believe I picked up from both of you that adding optional elements to a type can be a nice way of using the same type across multiple endpoints that communicate with slightly different versions of essentially the same data type.
These two xsd design strategies, when contrasted, present a major tradeoff between xsd reusability and (human) clarity. It would be great to have the best of both worlds: 1) An enterprise-wide harmony type that defines all possible elements, most of which are optional. This helps to standardize naming. 2) A service-specific type that adheres to the naming of its harmony type but lists only the elements that the service supports. This helps to communicate to the client exactly what the service supports.
Originally posted by Bob Jake: Forgetting harmonization runs counter to the SOA philosophy.
Using multiple schemas, however, poses an interesting problem.
http://www.coderanch.com/t/422681/Web-Services/java/Managing-xsd-types-multiple-service
|
I have a question regarding this zone file from Wikipedia: "
$ORIGIN example.com. ; designates the start of this zone file in the namespace
"
Why are these lines necessary?
example.com. NS ns ; ns.example.com is a nameserver for example.com
ns A 192.0.2.2 ; IPv4 address for ns.example.com
We already know that ns.example.com is the default name server, right? Then why should we specify its IP address? Isn't this zone file on the ns.example.com name server? I mean, if we are looking at it, why do we want the IP address of the server that holds the file we are already looking at?
The information presented by the authoritative name server must be consistent with information presented elsewhere.
In this case, a resolver has used a glue record to find this name server. The delegation and name information found here must match the delegation from the TLD (the NS record in the example), and the host information found here must match what was provided as extra information by the TLD, the glue record (the A record in the example).
http://serverfault.com/questions/313580/zone-file-explaination-dns
|
Overview of Outlook Anywhere
The Outlook Anywhere feature lets Outlook 2003 and Outlook 2007 clients connect to Exchange servers over the Internet by using RPC over HTTP.
There are several benefits to using Outlook Anywhere to enable Outlook 2003 and Outlook 2007 clients to access your Exchange messaging infrastructure. The benefits are as follows:
Remote access to Exchange servers from the Internet.
You can use the same URL and namespace that you use for Microsoft Exchange ActiveSync and Outlook Web Access.
You can use the same Secure Sockets Layer (SSL) server certificate that you use for both Outlook Web Access and Exchange ActiveSync.
Unauthenticated requests from Outlook cannot access Exchange servers.
Clients must trust the certification authority that issues the certificate.
You do not have to use a virtual private network (VPN) to access Exchange servers across the Internet.
You must allow only port 443 through your firewall, because Outlook Anywhere requests use HTTP over SSL. If you already use Outlook Web Access with SSL or Exchange ActiveSync with SSL, you do not have to open any additional ports from the Internet.
https://technet.microsoft.com/en-us/library/bb123741(v=exchg.80).aspx
|
Unigram tokenization
The Unigram algorithm is often used in SentencePiece, which is the tokenization algorithm used by models like ALBERT, T5, mBART, Big Bird, and XLNet.
💡 This section covers Unigram in depth, going as far as showing a full implementation. You can skip to the end if you just want a general overview of the tokenization algorithm.
Training algorithm
Compared to BPE and WordPiece, Unigram works in the other direction: it starts from a big vocabulary and removes tokens from it until it reaches the desired vocabulary size. There are several options to use to build that base vocabulary: we can take the most common substrings in pre-tokenized words, for instance, or apply BPE on the initial corpus with a large vocabulary size.
At each step of the training, the Unigram algorithm computes a loss over the corpus given the current vocabulary. Then, for each symbol in the vocabulary, the algorithm computes how much the overall loss would increase if the symbol was removed, and looks for the symbols that would increase it the least. Those symbols have a lower effect on the overall loss over the corpus, so in a sense they are “less needed” and are the best candidates for removal.
This is all a very costly operation, so we don’t just remove the single symbol associated with the lowest loss increase, but the p percent (p being a hyperparameter you can control, usually 10 or 20) of the symbols associated with the lowest loss increase. This process is then repeated until the vocabulary has reached the desired size.
Note that we never remove the base characters, to make sure any word can be tokenized.
Now, this is still a bit vague: the main part of the algorithm is to compute a loss over the corpus and see how it changes when we remove some tokens from the vocabulary, but we haven’t explained how to do this yet. This step relies on the tokenization algorithm of a Unigram model, so we’ll dive into this next.
We’ll reuse the corpus from the previous examples:
("hug", 10), ("pug", 5), ("pun", 12), ("bun", 4), ("hugs", 5)
and for this example, we will take all the strict substrings for the initial vocabulary:
["h", "u", "g", "hu", "ug", "p", "pu", "n", "un", "b", "bu", "s", "hug", "gs", "ugs"]
Tokenization algorithm
A Unigram model is a type of language model that considers each token to be independent of the tokens before it. It’s the simplest language model, in the sense that the probability of token X given the previous context is just the probability of token X. So, if we used a Unigram language model to generate text, we would always predict the most common token.
The probability of a given token is its frequency (the number of times we find it) in the original corpus, divided by the sum of all frequencies of all tokens in the vocabulary (to make sure the probabilities sum up to 1). For instance, "ug" is present in "hug", "pug", and "hugs", so it has a frequency of 20 in our corpus.
Here are the frequencies of all the possible subwords in the vocabulary:
("h", 15) ("u", 36) ("g", 20) ("hu", 15) ("ug", 20) ("p", 17) ("pu", 17) ("n", 16) ("un", 16) ("b", 4) ("bu", 4) ("s", 5) ("hug", 15) ("gs", 5) ("ugs", 5)
So, the sum of all frequencies is 210, and the probability of the subword "ug" is thus 20/210.
✏️ Now your turn! Write the code to compute the frequencies above and double-check that the results shown are correct, as well as the total sum.
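One possible solution sketch for that exercise (it counts possibly overlapping occurrences of each vocabulary token, weighted by the word counts given above):

```python
corpus = [("hug", 10), ("pug", 5), ("pun", 12), ("bun", 4), ("hugs", 5)]
vocab = ["h", "u", "g", "hu", "ug", "p", "pu", "n", "un",
         "b", "bu", "s", "hug", "gs", "ugs"]

def occurrences(word, token):
    # Count (possibly overlapping) occurrences of token inside word
    return sum(word[i:i + len(token)] == token
               for i in range(len(word) - len(token) + 1))

freqs = {tok: sum(f * occurrences(w, tok) for w, f in corpus) for tok in vocab}
total = sum(freqs.values())
print(freqs["ug"], total)  # 20 210
```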
Now, to tokenize a given word, we look at all the possible segmentations into tokens and compute the probability of each according to the Unigram model. Since all tokens are considered independent, this probability is just the product of the probability of each token. For instance, the tokenization ["p", "u", "g"] of "pug" has the probability:
P(["p", "u", "g"]) = P("p") × P("u") × P("g") = 17/210 × 36/210 × 20/210 = 0.001322
Comparatively, the tokenization ["pu", "g"] has the probability:
P(["pu", "g"]) = P("pu") × P("g") = 17/210 × 20/210 = 0.007710
so that one is way more likely. In general, tokenizations with the least tokens possible will have the highest probability (because of that division by 210 repeated for each token), which corresponds to what we want intuitively: to split a word into the least number of tokens possible.
The tokenization of a word with the Unigram model is then the tokenization with the highest probability. In the example of "pug", here are the probabilities we would get for each possible segmentation:
["p", "u", "g"] : 0.001322
["p", "ug"] : 0.007710
["pu", "g"] : 0.007710
So, "pug" would be tokenized as ["p", "ug"] or ["pu", "g"], depending on which of those segmentations is encountered first (note that in a larger corpus, equality cases like this will be rare).
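To make that enumeration concrete, here is a small brute-force sketch (using the frequency table above) that lists every segmentation of "pug" together with its probability:

```python
freqs = {"h": 15, "u": 36, "g": 20, "hu": 15, "ug": 20, "p": 17, "pu": 17,
         "n": 16, "un": 16, "b": 4, "bu": 4, "s": 5, "hug": 15, "gs": 5, "ugs": 5}
total = 210

def segmentations(word):
    # Every way of splitting word into tokens that are all in the vocabulary
    if not word:
        return [[]]
    return [[word[:i]] + rest
            for i in range(1, len(word) + 1) if word[:i] in freqs
            for rest in segmentations(word[i:])]

def probability(seg):
    p = 1.0
    for tok in seg:
        p *= freqs[tok] / total
    return p

for seg in segmentations("pug"):
    print(seg, round(probability(seg), 6))
```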
In this case, it was easy to find all the possible segmentations and compute their probabilities, but in general it’s going to be a bit harder. There is a classic algorithm used for this, called the Viterbi algorithm. Essentially, we can build a graph to detect the possible segmentations of a given word by saying there is a branch from character a to character b if the subword from a to b is in the vocabulary, and attribute to that branch the probability of the subword.
To find the path in that graph that is going to have the best score, the Viterbi algorithm determines, for each position in the word, the segmentation with the best score that ends at that position. Since we go from the beginning to the end, that best score can be found by looping through all subwords ending at the current position and then using the best tokenization score from the position this subword begins at. Then, we just have to unroll the path taken to arrive at the end.
Let’s take a look at an example using our vocabulary and the word "unhug". For each position, the subwords with the best scores ending there are the following:
Character 0 (u): "u" (score 0.171429)
Character 1 (n): "un" (score 0.076191)
Character 2 (h): "un" "h" (score 0.005442)
Character 3 (u): "un" "hu" (score 0.005442)
Character 4 (g): "un" "hug" (score 0.005442)
Thus "unhug" would be tokenized as ["un", "hug"].
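That forward pass can be sketched in a few lines over the toy vocabulary (the frequency table is the one given earlier); it reproduces the segmentation above:

```python
freqs = {"h": 15, "u": 36, "g": 20, "hu": 15, "ug": 20, "p": 17, "pu": 17,
         "n": 16, "un": 16, "b": 4, "bu": 4, "s": 5, "hug": 15, "gs": 5, "ugs": 5}
total = 210

def viterbi(word):
    # best[i] = (probability, start index of the last token) for word[:i]
    best = [(1.0, 0)] + [(0.0, None)] * len(word)
    for end in range(1, len(word) + 1):
        for start in range(end):
            tok = word[start:end]
            if tok in freqs:
                score = best[start][0] * freqs[tok] / total
                if score > best[end][0]:
                    best[end] = (score, start)
    # Unroll the back-pointers from the end of the word
    tokens, end = [], len(word)
    while end > 0:
        start = best[end][1]
        tokens.insert(0, word[start:end])
        end = start
    return tokens, best[len(word)][0]

print(viterbi("unhug"))  # (['un', 'hug'], 0.00544...)
```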
✏️ Now your turn! Determine the tokenization of the word "huggun", and its score.
Back to training
Now that we have seen how the tokenization works, we can dive a little more deeply into the loss used during training. At any given stage, this loss is computed by tokenizing every word in the corpus, using the current vocabulary and the Unigram model determined by the frequencies of each token in the corpus (as seen before).
Each word in the corpus has a score, and the loss is the negative log likelihood of those scores — that is, the sum for all the words in the corpus of all the -log(P(word)).
Let’s go back to our example with the following corpus:
("hug", 10), ("pug", 5), ("pun", 12), ("bun", 4), ("hugs", 5)
The tokenization of each word, with its respective score, is:
"hug": ["hug"] (score 0.071428)
"pug": ["pu", "g"] (score 0.007710)
"pun": ["pu", "n"] (score 0.006168)
"bun": ["bu", "n"] (score 0.001451)
"hugs": ["hug", "s"] (score 0.001701)
So the loss is:
10 * (-log(0.071428)) + 5 * (-log(0.007710)) + 12 * (-log(0.006168)) + 4 * (-log(0.001451)) + 5 * (-log(0.001701)) = 169.8
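As a quick arithmetic check (scores and counts copied from the list above):

```python
from math import log

# (score of the best tokenization, word count), taken from the text above
scored = {"hug": (0.071428, 10), "pug": (0.007710, 5), "pun": (0.006168, 12),
          "bun": (0.001451, 4), "hugs": (0.001701, 5)}
loss = sum(freq * -log(score) for score, freq in scored.values())
print(round(loss, 1))  # 169.8
```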
Now we need to compute how removing each token affects the loss. This is rather tedious, so we’ll just do it for two tokens here and save the whole process for when we have code to help us. In this (very) particular case, we had two equivalent tokenizations of all the words: as we saw earlier, for example, "pug" could be tokenized ["p", "ug"] with the same score. Thus, removing the "pu" token from the vocabulary will give the exact same loss.
On the other hand, removing "hug" will make the loss worse, because the tokenization of "hug" and "hugs" will become:
"hug": ["hu", "g"] (score 0.006802)
"hugs": ["hu", "gs"] (score 0.001701)
These changes will cause the loss to rise by:
10 * (-log(0.006802)) - 10 * (-log(0.071428)) = 23.5
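That 23.5 can be checked the same way ("hug" appears 10 times, and only its own score changes; "hugs" keeps the same score in both vocabularies):

```python
from math import log

old_score, new_score = 0.071428, 0.006802  # score of "hug" before and after removal
delta = 10 * (-log(new_score)) - 10 * (-log(old_score))
print(round(delta, 1))  # 23.5
```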
Therefore, the token "pu" will probably be removed from the vocabulary, but not "hug".
Implementing Unigram
Now let’s implement everything we’ve seen so far in code. Like with BPE and WordPiece, this is not an efficient implementation of the Unigram algorithm (quite the opposite), but it should help you understand it a bit better.
We will use the same corpus as before as an example:
corpus = [
    "This is the Hugging Face Course.",
    "This chapter is about tokenization.",
    "This section shows several tokenizer algorithms.",
    "Hopefully, you will be able to understand how they are trained and generate tokens.",
]
This time, we will use xlnet-base-cased as our model:
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
Like for BPE and WordPiece, we begin by counting the number of occurrences of each word in the corpus:
from collections import defaultdict

word_freqs = defaultdict(int)
for text in corpus:
    words_with_offsets = tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str(text)
    new_words = [word for word, offset in words_with_offsets]
    for word in new_words:
        word_freqs[word] += 1

word_freqs
Then, we need to initialize our vocabulary to something larger than the vocab size we will want at the end. We have to include all the basic characters (otherwise we won’t be able to tokenize every word), but for the bigger substrings we’ll only keep the most common ones, so we sort them by frequency:
char_freqs = defaultdict(int)
subwords_freqs = defaultdict(int)
for word, freq in word_freqs.items():
    for i in range(len(word)):
        char_freqs[word[i]] += freq
        # Loop through the subwords of length at least 2
        for j in range(i + 2, len(word) + 1):
            subwords_freqs[word[i:j]] += freq

# Sort subwords by frequency
sorted_subwords = sorted(subwords_freqs.items(), key=lambda x: x[1], reverse=True)
sorted_subwords[:10]
[('▁t', 7), ('is', 5), ('er', 5), ('▁a', 5), ('▁to', 4), ('to', 4), ('en', 4), ('▁T', 3), ('▁Th', 3), ('▁Thi', 3)]
We group the characters with the best subwords to arrive at an initial vocabulary of size 300:
token_freqs = list(char_freqs.items()) + sorted_subwords[: 300 - len(char_freqs)]
token_freqs = {token: freq for token, freq in token_freqs}
💡 SentencePiece uses a more efficient algorithm called Enhanced Suffix Array (ESA) to create the initial vocabulary.
Next, we compute the sum of all frequencies, to convert the frequencies into probabilities. For our model we will store the logarithms of the probabilities, because it’s more numerically stable to add logarithms than to multiply small numbers, and this will simplify the computation of the loss of the model:
from math import log

total_sum = sum([freq for token, freq in token_freqs.items()])
model = {token: -log(freq / total_sum) for token, freq in token_freqs.items()}
Now the main function is the one that tokenizes words using the Viterbi algorithm. As we saw before, that algorithm computes the best segmentation of each substring of the word, which we will store in a variable named best_segmentations. We will store one dictionary per position in the word (from 0 to its total length), with two keys: the index of the start of the last token in the best segmentation, and the score of the best segmentation. With the index of the start of the last token, we will be able to retrieve the full segmentation once the list is completely populated.
Populating the list is done with just two loops: the main loop goes over each start position, and the second loop tries all substrings beginning at that start position. If the substring is in the vocabulary, we have a new segmentation of the word up until that end position, which we compare to what is in best_segmentations.
Once the main loop is finished, we just start from the end and hop from one start position to the next, recording the tokens as we go, until we reach the start of the word:
def encode_word(word, model):
    best_segmentations = [{"start": 0, "score": 1}] + [
        {"start": None, "score": None} for _ in range(len(word))
    ]
    for start_idx in range(len(word)):
        # This should be properly filled by the previous steps of the loop
        best_score_at_start = best_segmentations[start_idx]["score"]
        for end_idx in range(start_idx + 1, len(word) + 1):
            token = word[start_idx:end_idx]
            if token in model and best_score_at_start is not None:
                score = model[token] + best_score_at_start
                # If we have found a better segmentation ending at end_idx, we update
                if (
                    best_segmentations[end_idx]["score"] is None
                    or best_segmentations[end_idx]["score"] > score
                ):
                    best_segmentations[end_idx] = {"start": start_idx, "score": score}

    segmentation = best_segmentations[-1]
    if segmentation["score"] is None:
        # We did not find a tokenization of the word -> unknown
        return ["<unk>"], None

    score = segmentation["score"]
    start = segmentation["start"]
    end = len(word)
    tokens = []
    while start != 0:
        tokens.insert(0, word[start:end])
        next_start = best_segmentations[start]["start"]
        end = start
        start = next_start
    tokens.insert(0, word[start:end])
    return tokens, score
We can already try our initial model on some words:
print(encode_word("Hopefully", model))
print(encode_word("This", model))
(['H', 'o', 'p', 'e', 'f', 'u', 'll', 'y'], 41.5157494601402) (['This'], 6.288267030694535)
Now it’s easy to compute the loss of the model on the corpus!
def compute_loss(model):
    loss = 0
    for word, freq in word_freqs.items():
        _, word_loss = encode_word(word, model)
        loss += freq * word_loss
    return loss
We can check that it works on the model we have:
compute_loss(model)
413.10377642940875
Computing the scores for each token is not very hard either; we just have to compute the loss for the models obtained by deleting each token:
import copy


def compute_scores(model):
    scores = {}
    model_loss = compute_loss(model)
    for token, score in model.items():
        # We always keep tokens of length 1
        if len(token) == 1:
            continue
        model_without_token = copy.deepcopy(model)
        _ = model_without_token.pop(token)
        scores[token] = compute_loss(model_without_token) - model_loss
    return scores
We can try it on a given token:
scores = compute_scores(model)
print(scores["ll"])
print(scores["his"])
Since "ll" is used in the tokenization of "Hopefully", and removing it will probably make us use the token "l" twice instead, we expect it will have a positive loss. "his" is only used inside the word "This", which is tokenized as itself, so we expect it to have a zero loss. Here are the results:
6.376412403623874
0.0
💡 This approach is very inefficient, so SentencePiece uses an approximation of the loss of the model without token X: instead of starting from scratch, it just replaces token X by its segmentation in the vocabulary that is left. This way, all the scores can be computed at once at the same time as the model loss.
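To make the idea concrete, here is a toy sketch of that approximation (the vocabulary, scores, and usage frequency below are made up, and the re-segmentation is simplified to a character fallback — this is not the actual SentencePiece code): once we know how token X would be re-segmented by the remaining vocabulary, the loss delta of removing X is just its usage frequency times the score difference of the replacement.

```python
# Toy model: scores are negative log probabilities (made-up numbers)
model = {"h": 2.0, "i": 2.0, "s": 2.0, "hi": 3.0}


def resegment_without(token):
    # Simplified stand-in for re-encoding with the remaining vocabulary:
    # here we just fall back to single characters, which always exist
    return list(token)


# Approximate the score of removing "hi": replace each of its uses by the
# re-segmentation and sum the score difference, instead of re-encoding the
# whole corpus from scratch with compute_loss()
usage_freq = 10  # assumed number of times "hi" appears in tokenizations
replacement = resegment_without("hi")
delta = usage_freq * (sum(model[t] for t in replacement) - model["hi"])
print(replacement)  # ['h', 'i']
print(delta)        # 10 * ((2.0 + 2.0) - 3.0) = 10.0
```

Because the replacement's score is always at least the removed token's score, these deltas are non-negative, and they can all be accumulated in a single pass over the corpus.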
With all of this in place, the last thing we need to do is add the special tokens used by the model to the vocabulary, then loop until we have pruned enough tokens from the vocabulary to reach our desired size:
percent_to_remove = 0.1
while len(model) > 100:
    scores = compute_scores(model)
    sorted_scores = sorted(scores.items(), key=lambda x: x[1])
    # Remove percent_to_remove tokens with the lowest scores.
    for i in range(int(len(model) * percent_to_remove)):
        _ = token_freqs.pop(sorted_scores[i][0])

    total_sum = sum([freq for token, freq in token_freqs.items()])
    model = {token: -log(freq / total_sum) for token, freq in token_freqs.items()}
Then, to tokenize some text, we just need to apply the pre-tokenization and then use our encode_word() function:
def tokenize(text, model):
    words_with_offsets = tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str(text)
    pre_tokenized_text = [word for word, offset in words_with_offsets]
    encoded_words = [encode_word(word, model)[0] for word in pre_tokenized_text]
    return sum(encoded_words, [])


tokenize("This is the Hugging Face course.", model)
['▁This', '▁is', '▁the', '▁Hugging', '▁Face', '▁', 'c', 'ou', 'r', 's', 'e', '.']
That’s it for Unigram! Hopefully by now you’re feeling like an expert in all things tokenizer. In the next section, we will delve into the building blocks of the 🤗 Tokenizers library, and show you how you can use them to build your own tokenizer.
https://huggingface.co/course/chapter6/7
Java stack data structure interview questions and answers
Core Java Coding Questions and Answers for beginner to intermediate level
Q. Can you write a program to evaluate if a given string input has proper closing bracket for every opening bracket?
A. First, think of the pseudo-code, which goes as follows.
1. Store every opening parenthesis (i.e. a LHS parenthesis) in a stack. This will enable LIFO.
2. When you encounter a closing parenthesis (i.e. an RHS parenthesis), pop the last entry, which should be the corresponding opening parenthesis.
3. If not found, then the parentheses are not matched.
If required, draw a diagram of the stack pushing each LHS parenthesis and popping it when the matching RHS parenthesis is encountered.
Here is a sample program to illustrate a stack (i.e. LIFO) in action, evaluating whether a program has balanced parentheses. First, the enum that defines the parenthesis constants:
public enum PARENTHESIS {
    LP('('), RP(')'), LB('{'), RB('}'), LSB('['), RSB(']');

    char symbol;

    PARENTHESIS(Character symbol) {
        this.symbol = symbol;
    }

    char getSymbol() {
        return this.symbol;
    }
}
Now, the stack in action, using its LIFO mechanism to match a closing parenthesis (i.e. RHS) with an opening parenthesis (i.e. LHS). If you find an LHS parenthesis, push it onto the stack, and when you find an RHS parenthesis, pop the stack to see if you have the corresponding LHS parenthesis.
import java.util.ArrayDeque;
import java.util.Deque;

public class Evaluate {

    // stores the opening parentheses
    final Deque<Character> parenthesesStack = new ArrayDeque<Character>();

    public boolean isBalanced(String s) {
        for (int i = 0; i < s.length(); i++) {
            // push each opening (LHS) parenthesis; auto-boxing takes place
            if (s.charAt(i) == PARENTHESIS.LP.getSymbol()
                    || s.charAt(i) == PARENTHESIS.LB.getSymbol()
                    || s.charAt(i) == PARENTHESIS.LSB.getSymbol()) {
                parenthesesStack.push(s.charAt(i));
            }
            // for each RHS parenthesis, check that there is a matching LHS parenthesis;
            // if the stack is empty or the popped symbol does not match, it is not balanced
            else if (s.charAt(i) == PARENTHESIS.RP.getSymbol()) {
                if (parenthesesStack.isEmpty() || parenthesesStack.pop() != PARENTHESIS.LP.getSymbol()) {
                    return false;
                }
            } else if (s.charAt(i) == PARENTHESIS.RB.getSymbol()) {
                if (parenthesesStack.isEmpty() || parenthesesStack.pop() != PARENTHESIS.LB.getSymbol()) {
                    return false;
                }
            } else if (s.charAt(i) == PARENTHESIS.RSB.getSymbol()) {
                if (parenthesesStack.isEmpty() || parenthesesStack.pop() != PARENTHESIS.LSB.getSymbol()) {
                    return false;
                }
            }
        }
        // if the stack is empty, everything matched; otherwise the input is not balanced
        return parenthesesStack.isEmpty();
    }
}
Note: ArrayDeque is not thread-safe and does not allow null elements. The java.util.concurrent package has BlockingDeque and LinkedBlockingDeque.
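As a hypothetical sketch of the thread-safe alternative mentioned above (class and variable names are mine, not from the original post), LinkedBlockingDeque supports blocking and non-blocking insertion at both ends, and can be bounded:

```java
import java.util.concurrent.BlockingDeque;
import java.util.concurrent.LinkedBlockingDeque;

public class DequeDemo {
    public static void main(String[] args) throws InterruptedException {
        // A thread-safe, optionally bounded deque from java.util.concurrent
        BlockingDeque<String> deque = new LinkedBlockingDeque<String>(2);

        deque.putFirst("a");                      // insert at the head, blocking if full
        deque.putLast("b");                       // insert at the tail
        System.out.println(deque.offerLast("c")); // full -> returns false instead of blocking

        System.out.println(deque.takeFirst());    // "a"
        System.out.println(deque.takeLast());     // "b"
    }
}
```

Unlike ArrayDeque, attempting to insert null here throws a NullPointerException, and the put/take methods block rather than fail when the deque is full or empty.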
Finally, a simple JUnit class to test. You need to have the junit-xxx.jar in the classpath. The version I used was 4.8.2.
import junit.framework.Assert;

import org.junit.Before;
import org.junit.Test;

public class EvaluateTest {

    Evaluate eval = null;

    @Before
    public void setUp() {
        eval = new Evaluate();
    }

    @Test
    public void testPositiveIsBalanced() {
        boolean result = eval.isBalanced("public static void main(String[] args) {}");
        Assert.assertTrue(result);
    }

    @Test
    public void testNegativeIsBalanced() {
        boolean result = eval.isBalanced("public static void main(String[ args) {}"); // missing ']'
        Assert.assertFalse(result);

        result = eval.isBalanced("public static void main(String[] args) }"); // missing '{'
        Assert.assertFalse(result);

        result = eval.isBalanced("public static void main String[] args) {}"); // missing '('
        Assert.assertFalse(result);
    }
}
Tip: When testing, test positive and negative scenarios.
Note: The above example is shown to illustrate how a LIFO mechanism can be used to determine if the parentheses are balanced. The actual implementation is far from optimal. Wherever there is a large block of if-else or switch statements, one should think about an object oriented solution.
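As a hedged sketch of one such refactoring (class and variable names are mine, not from the original post), a Map from each closing symbol to its opening counterpart collapses the if-else chain into table lookups:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

public class EvaluateCompact {
    // map each closing symbol to its opening counterpart
    private static final Map<Character, Character> PAIRS = new HashMap<Character, Character>();
    static {
        PAIRS.put(')', '(');
        PAIRS.put('}', '{');
        PAIRS.put(']', '[');
    }

    public boolean isBalanced(String s) {
        Deque<Character> stack = new ArrayDeque<Character>();
        for (char c : s.toCharArray()) {
            if (PAIRS.containsValue(c)) {
                stack.push(c); // opening symbol
            } else if (PAIRS.containsKey(c)) {
                // a closing symbol must match the most recent opening one
                if (stack.isEmpty() || stack.pop() != PAIRS.get(c).charValue()) {
                    return false;
                }
            }
        }
        return stack.isEmpty();
    }

    public static void main(String[] args) {
        EvaluateCompact e = new EvaluateCompact();
        System.out.println(e.isBalanced("([]{})")); // true
        System.out.println(e.isBalanced("([)]"));   // false
    }
}
```

Adding a new bracket pair now only requires one extra map entry, not another else-if branch.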
Q. Why Deque interface is different from other collection classes?
A. In a Deque (double-ended queue), you can insert and delete objects at both the start and the end of the collection, whereas in a typical queue or stack, inserts and deletes happen at one end only.
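For example (a minimal sketch with made-up values), ArrayDeque exposes both ends directly:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DequeEnds {
    public static void main(String[] args) {
        Deque<Integer> deque = new ArrayDeque<Integer>();
        deque.addFirst(2);   // [2]
        deque.addFirst(1);   // [1, 2]
        deque.addLast(3);    // [1, 2, 3]

        System.out.println(deque.pollFirst()); // 1 - removed from the head
        System.out.println(deque.pollLast());  // 3 - removed from the tail
        System.out.println(deque);             // [2]
    }
}
```

This is also why Deque can serve as both a stack (push/pop at one end) and a queue (add at one end, remove at the other).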
Q. What are the different ways to look at the trace of your program execution?
A.
1. Java is a stack-based language, and program execution is pushed onto and popped off a stack. When a method is entered, a frame for it is pushed onto the stack, and when that method invokes other methods, they are pushed onto the stack in the order in which they are executed. As each method completes its execution, its frame is popped off the stack in LIFO order. Say methodA() invoked methodB(), and methodB() invoked methodC(); when execution of methodC() is completed, it is popped out first, followed by methodB() and then methodA(). When an exception is thrown at any point, a stack trace is printed for you to be able to find where the issue is.
2. A Java developer can access a stack trace at any time. One way to do this is to call
Thread.currentThread().getStackTrace(); // handy for tracing
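A hypothetical sketch of this call in a small call chain (class and method names are mine); note that on most JVMs the first frame is the getStackTrace() call itself, followed by the calling methods in most-recent-first order:

```java
public class TraceDemo {
    static void methodC() {
        // walk the current call chain, most recent frame first
        for (StackTraceElement e : Thread.currentThread().getStackTrace()) {
            System.out.println(e.getMethodName());
        }
    }

    static void methodB() { methodC(); }

    static void methodA() { methodB(); }

    public static void main(String[] args) {
        methodA(); // prints frames for methodC, methodB, methodA, and main
    }
}
```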
You could get a stack trace of all the threads using Java utilities such as jstack or JConsole, by sending a kill -QUIT signal (on a POSIX operating system) or pressing <ctrl><break> on a Win32 platform to get a thread dump, or by using the JMX API as shown below. ThreadMXBean is the management interface for the thread system of the JVM.
ThreadMXBean bean = ManagementFactory.getThreadMXBean();
ThreadInfo[] infos = bean.dumpAllThreads(true, true);
for (ThreadInfo info : infos) {
    StackTraceElement[] elems = info.getStackTrace();
    // ...do something
}
The thread dumps are very useful in identifying concurrency issues like dead locks, contention issues, thread starvation, etc.
Q. Are recursive method calls possible in Java?
A. Yes. Java is stack-based, and because of the stack's LIFO (Last In, First Out) property, each frame remembers its caller, so the JVM knows where to return when a method completes.
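For example, a classic recursive factorial (a minimal sketch): each call pushes a new frame onto the call stack, and the frames unwind in LIFO order as the multiplications are performed on the way back:

```java
public class Factorial {
    // Each recursive call pushes a new frame onto the call stack;
    // frames are popped in LIFO order as the calls return.
    static long factorial(int n) {
        if (n <= 1) {
            return 1; // base case stops the recursion
        }
        return n * factorial(n - 1);
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // 120
    }
}
```

Note that very deep recursion exhausts the stack and throws a StackOverflowError.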
Q. How would you go about analyzing stack traces correctly?
A.
1. One of the most important concepts of correctly understanding a stack trace is to recognize that it lists the execution path in reverse chronological order from most recent operation to earliest operation. That is, it is LIFO.
2. The stack trace below is simple, and it tells you that the root cause is a NullPointerException on line 16 of ClassC, so you look at the topmost entry.
Exception in thread "main" java.lang.NullPointerException
    at com.myapp.ClassC.methodC(ClassC.java:16)
    at com.myapp.ClassB.methodB(ClassB.java:25)
    at com.myapp.ClassA.main(ClassA.java:14)
3. The stack trace can get more complex with multiple "caused by" clauses, and in this case you usually look at the bottom most "caused by". For example,
Exception in thread "main" java.lang.IllegalStateException: ClassC has a null property
    at com.myapp.ClassC.methodC(ClassC.java:16)
    at com.myapp.ClassB.methodB(ClassB.java:25)
    at com.myapp.ClassA.main(ClassA.java:14)
Caused by: com.myapp.MyAppValidationException
    at com.myapp.ClassB.methodB(ClassB.java:25)
    at com.myapp.ClassC.methodC(ClassC.java:16)
    ... 1 more
Caused by: java.lang.NullPointerException
    at com.myapp.ClassC.methodC(ClassC.java:16)
    ... 1 more
The root cause is the last "caused by", which is a NullPointerException on line 16 of ClassC.
4. When you use a plethora of third-party libraries like Spring, Hibernate, etc., your stack trace's "caused by" chain can really grow, and you need to look at the bottommost "caused by" that has a package relevant to your application, like com.myapp.ClassC, and skip library-specific ones like org.hibernate.exception.*.
Q. Can you reverse the following numbers {1, 4, 6, 7, 8, 9}?
A. There are a number of ways to achieve this. Speaking of LIFO, the following example illustrates using a stack based implementation.
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

public class ReverseNumbers {

    public static void main(String[] args) {
        Integer[] values = {1, 4, 6, 7, 8, 9};
        Deque<Integer> numberStack = new ArrayDeque<Integer>(10);

        for (int i = 0; i < values.length; i++) {
            numberStack.push(values[i]); // push the numbers in the given order
        }

        Integer[] valuesReversed = new Integer[values.length];
        int i = 0;
        while (!numberStack.isEmpty()) {
            // pop happens in reverse (LIFO) order;
            // i++ is a post-increment: assign first, then increment for the next round
            valuesReversed[i++] = numberStack.pop();
        }

        System.out.println(Arrays.deepToString(valuesReversed));
    }
}
The output will be
[9, 8, 7, 6, 4, 1]
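For comparison, another of the ways mentioned above is to let the library do the work — a minimal sketch using Collections.reverse() (class name is mine):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ReverseAlternative {
    public static void main(String[] args) {
        List<Integer> values = Arrays.asList(1, 4, 6, 7, 8, 9);
        Collections.reverse(values); // reverses the list in place
        System.out.println(values);  // [9, 8, 7, 6, 4, 1]
    }
}
```

This is shorter, but the stack-based version above better demonstrates the LIFO property the question is probing for.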
http://java-success.blogspot.com.au/2012/04/java-stack-data-structure-interview.html