How to use the camera capture task for Windows Phone 8

Applies to: Windows Phone 8 and Windows Phone Silverlight 8.1 | Windows Phone OS 7.1

Use the camera capture task to enable users to take a photo from your application using the built-in Camera application. If the user completes the task, an event is raised and the event handler receives the photo in the result.

On Windows Phone 8, if the user accepts a photo taken with the camera capture task, the photo is automatically saved to the phone's camera roll. On previous versions of Windows Phone, the photo is not automatically saved.

Photos captured with the CameraCaptureTask API are always copied to the phone's camera roll. If the customer has set up the phone for automatic uploads, these pictures will be copied to OneDrive and potentially shared with a broader audience than your app intended. For this reason, if your app captures pictures that you explicitly do not want shared or uploaded, such as temporary images or images that contain private information, do not use the CameraCaptureTask API. Instead, use the PhotoCamera API to implement your own camera UI. For more information about creating camera apps, see How to create a base camera app for Windows Phone 8.

By using Choosers, you help provide a consistent user experience throughout the Windows Phone platform. For more information, see Launchers and Choosers for Windows Phone 8.

Memory allocated for the camera capture task does not count toward total application memory use. This helps minimize the amount of memory your application uses to capture photos, which is particularly important when your application runs on a lower-memory device. For more information, see Developing apps for lower-memory phones for Windows Phone 8.

When testing the camera capture task in the emulator, press F7 while the task is active to capture a photo.

To use the camera capture task

1. Add the following statement to your code.

2. Declare the task object. It must have page scope, so declare it in your page before the constructor.

3. Add the following code to your page constructor. This code initializes the task object and identifies the method to run after the user completes the task.

4. Add the following code to your application wherever you need it, such as in a button click event. To test this procedure, you can put the code in the page constructor. This is the code that launches the task.

5. Add the code for the completed event handler to your page. This code runs after the user completes the task. The result is a PhotoResult object that exposes a stream containing the image data. For information about working with photo image streams, see Camera and photos for Windows Phone 8.

Private Sub cameraCaptureTask_Completed(sender As Object, e As PhotoResult)
    If e.TaskResult = TaskResult.OK Then
        MessageBox.Show(e.ChosenPhoto.Length.ToString())

        ' Code to display the photo on the page in an image control named myImage.
        ' Dim bmp As System.Windows.Media.Imaging.BitmapImage = New System.Windows.Media.Imaging.BitmapImage()
        ' bmp.SetSource(e.ChosenPhoto)
        ' myImage.Source = bmp
    End If
End Sub
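The code snippets referenced by the numbered steps were lost from this copy of the page. A minimal Visual Basic sketch of the declaration, initialization, and launch steps might look like the following (the page class and button names are assumptions; only cameraCaptureTask and its Completed handler come from the handler shown above):

```vb
' Step 1: the Imports statement for the task APIs.
Imports Microsoft.Phone.Tasks

Partial Public Class MainPage
    Inherits PhoneApplicationPage

    ' Step 2: page scope - declared in the page, before the constructor.
    Dim cameraCaptureTask As CameraCaptureTask

    Public Sub New()
        InitializeComponent()

        ' Step 3: initialize the task object and identify the completed handler.
        cameraCaptureTask = New CameraCaptureTask()
        AddHandler cameraCaptureTask.Completed, AddressOf cameraCaptureTask_Completed
    End Sub

    ' Step 4: launch the task, for example from a button click event.
    Private Sub captureButton_Click(sender As Object, e As RoutedEventArgs)
        cameraCaptureTask.Show()
    End Sub
End Class
```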
OPCFW_CODE
- A compatible computer: Not every computer will work with Mac OS X, even with the help of myHack. Read the Hackintosh compatibility guide very carefully to check whether your computer qualifies. The hardware requirements for OS X Mavericks are essentially identical to those for OS X Mountain Lion; AMD processors and older 32-bit Intel processors (such as the Pentium M) are not supported. If your computer already has OS X Mountain Lion installed, myHack will simply update Mountain Lion to Mavericks, without deleting any of your apps or files.
- An empty hard drive partition: Mac OS X needs its own hard drive partition (a minimum of 10 GB of space is required, but at least 50 GB is recommended).
- myHack (free): myHack is a Mac program that modifies the official OS X Mavericks DP1 installer and writes it onto a USB drive. You can then use this myHack USB drive to run the Mavericks installer on a PC. myHack works with Mac OS X Snow Leopard and newer.
- A Hackintosh with Snow Leopard/Lion/Mountain Lion already installed, a real Mac, or a Mac OS X virtual machine: myHack is a Mac app, so you need a computer with Mac OS X to run it. You could use a real Mac, if you own one. Alternatively, you could install Mac OS X Snow Leopard on your PC, and then follow this guide to install Mavericks (if your computer uses an Ivy Bridge processor, be sure to use iBoot for Ivy Bridge). As one last option, you could install Mountain Lion on a virtual machine, and run myHack there instead. Be sure to install the VirtualBox Extension Pack so your virtual machine can see USB drives.
- OS X 10.9 Mavericks: The method used by this guide requires a copy of the Mavericks installation app. It's available from Apple's Developer Program, which has a membership fee of $99 a year. Of course, you could always download it from bittorrent, too. Make sure that your downloaded copy is called either "Install OS X 10.9 Developer Preview.app" or "InstallESD.dmg" (myHack works with either version).
- An empty USB drive (8 GB or larger): The USB drive used for myHack must be at least 8 GB in size. Since myHack will erase all of the files on your USB drive, back up its contents first. You can reuse this USB drive for normal storage after you finish installing Mavericks.
- Multibeast (free): Multibeast is a collection of kext files that your Hackintosh will need to run properly after the initial installation. Download it onto a USB drive. Be sure to download the newest version 5 of Multibeast, not the older versions 3 or 4 (which are for Snow Leopard and Lion, respectively).
OPCFW_CODE
Psychtoolbox makes it easy to synthesize and show accurately controlled visual and auditory stimuli and interact with the observer.

A new BETA and its Service Pack 1 update were released on 14th April 2016. The release tag is "PTB_Beta-2016-04-14_V3.0.12", with the full tree and commit logs under the URL: https://github.com/Psychtoolbox-3/Psychtoolbox-3/tree/PTB_Beta-2016-04-14_V3.0.12

The new BETA "Nothing but the rain" and its Service Pack 1 update were released on 15th March 2016.

The survey has a mandatory part of six questions, important for funding decisions and PTB's future, and an optional longer part, mostly to determine how you use PTB, what your needs and wishes are, and what hardware/software you use. The release tag is "PTB_Beta-2016-01-25_V3.0.12", with the full tree and commit logs under the URL: https://github.com/Psychtoolbox-3/Psychtoolbox-3/tree/PTB_Beta-2016-01-25_V3.0.12

The new BETA "Puking Rainbow Unicorn SP1" was released on 7th December 2015. As usual, the complete development history can be found in our GitHub repository. The release tag is "PTB_Beta-2017-08-19_V3.0.14", with the full tree and commit logs under the URL: https://github.com/Psychtoolbox-3/Psychtoolbox-3/tree/PTB_Beta-2017-08-19_V3.0.14

Psychtoolbox 3.0.14 "Kalsarikännit" was released on 11th June 2017.
OPCFW_CODE
package contact;

/* Contact Service Requirements
 * The contact service shall be able to add contacts with a unique ID.
 * The contact service shall be able to delete contacts per contact ID.
 * The contact service shall be able to update contact fields per contact ID.
 * The following fields are updatable:
 *   - firstName
 *   - lastName
 *   - phoneNumber
 *   - address
 */

// Professor Wilson, ignore the print outs -- I like to see something printed
// so I know what and where things are happening.

import java.util.ArrayList;

public class ContactService {

    /* the list of contacts */
    private ArrayList<ContactClass> contacts;

    /* default constructor */
    public ContactService() {
        contacts = new ArrayList<>();
    }

    /* Adds a contact to the list if it is not already present. */
    public boolean add(ContactClass contact) {
        boolean alreadyPresent = false;
        for (ContactClass c : contacts) {
            if (c.equals(contact)) {
                alreadyPresent = true;
            }
        }
        if (!alreadyPresent) {
            contacts.add(contact);
            System.out.println("Contact Added Successfully!");
            return true;
        } else {
            System.out.println("Contact already present");
            return false;
        }
    }

    /* Removes the contact with the given ID, if present. */
    public boolean remove(String ID) {
        for (ContactClass c : contacts) {
            if (c.getID().equals(ID)) {
                contacts.remove(c); // safe: we return immediately after removing
                System.out.println("Contact removed Successfully!");
                return true;
            }
        }
        System.out.println("Contact not present");
        return false;
    }

    /*
     * Updates the contact with the given ID, if found.
     * Pass an empty string for any field that should stay the same.
     */
    public boolean update(String ID, String firstName, String lastName,
                          String phoneNumber, String address) {
        for (ContactClass c : contacts) {
            if (c.getID().equals(ID)) {
                if (!firstName.equals(""))
                    c.setFirstName(firstName);
                if (!lastName.equals(""))
                    c.setLastName(lastName);
                // Fixed: the original tested Address here and stored the phone
                // number via setAddress, so the phone number was never updated.
                if (!phoneNumber.equals(""))
                    c.setPhoneNumber(phoneNumber);
                if (!address.equals(""))
                    c.setAddress(address);
                System.out.println("Contact updated Successfully!");
                return true;
            }
        }
        System.out.println("Contact not present");
        return false;
    }
}
STACK_EDU
"initial value in 'vmmin' is not finite" error in flexsurvspline call

When I run flexsurvspline, from the flexsurv package, on the attached dataset, I get the following error for any k>1:

flexsurvspline(Surv(time,dead)~1, data=input_df, k=2)
Error in optim(method = "BFGS", par = c(gamma0 = 0, gamma1 = 0, gamma2 = 0, :
  initial value in 'vmmin' is not finite

From reading other posts with similar issues, I gather this is likely an issue with the inits parameter, and that I might have to generate my own inits function for this particular dataset. But I haven't found any guidance on how to do this for spline fits or what goes into that parameter. Am I correct that this is what's causing the error, and if so, how should an inits function be determined? input_df.xlsx Thanks in advance!

If you use the debugger you can see what's going on inside the function. By specifying debugonce(flexsurvspline) and then running the function, you can step through and see what's happening at each step. (NB: capital Q gets you out of the debugging browser.) When I did that, I found that, because more than 30% of the data is at the first time value (2), which is also the left boundary knot, the function is setting the knots at:

Browse[2]> knots
          33.33333% 66.66667%
0.6931472 0.6931472 1.0986123 2.5649494

Note that the first two knots are at the same value, which essentially breaks the function. You can fix this by specifying the interior knot values directly on the log-time scale.

flexsurvspline(Surv(time,dead)~1, data=input_df, knots=c(1.8,2.3))
# Call:
# flexsurvspline(formula = Surv(time, dead) ~ 1, data = input_df,
#     knots = c(1.8, 2.3))
#
# Estimates:
#          est       L95%      U95%     se
# gamma0   -3.475    -3.783    -3.167   0.157
# gamma1    3.021     2.785     3.257   0.120
# gamma2    4.294     3.341     5.248   0.487
# gamma3   -7.908   -10.305    -5.510   1.223
#
# N = 428, Events: 428, Censored: 0
# Total time at risk: 1401
# Log-likelihood = -725.7712, df = 4
# AIC = 1459.542

Thanks for the explanation!
Adjusting the 33.3% knot to 0.693148 gives me lower AIC values than a 1-knot spline. Is there any disadvantage to specifying custom interior knot values? Not really. As you increase the flexibility of the model you have to be more careful about overfitting.
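The knot-placement failure described above can be reproduced without flexsurv: the default interior knots are placed at quantiles of the log survival times, so when more than a third of the times equal the minimum value, the first interior knot collides with the left boundary knot. A small Python sketch of this (synthetic data, not the attached spreadsheet; the nearest-rank quantile is a simplification of R's default):

```python
import math

# Synthetic times: 35% at the minimum value 2, like the dataset described above
times = [2] * 35 + list(range(3, 68))   # 100 observations in total
log_t = sorted(math.log(t) for t in times)

def quantile(sorted_vals, q):
    """Nearest-rank quantile (a simplified stand-in for R's quantile())."""
    idx = min(int(q * len(sorted_vals)), len(sorted_vals) - 1)
    return sorted_vals[idx]

boundary_left = min(log_t)      # left boundary knot: min log time
k1 = quantile(log_t, 1 / 3)     # first interior knot for k=2
k2 = quantile(log_t, 2 / 3)     # second interior knot for k=2

# With >33% of the data at the minimum, the first interior knot equals
# the left boundary knot, giving a degenerate spline basis.
print(boundary_left, k1, k2)
```

This is exactly the situation visible in the `Browse[2]> knots` output: two knots at 0.6931472 (= log 2), which is why specifying the interior knots manually fixes the fit.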
STACK_EXCHANGE
In reading research, a question of enormous practical (and theoretical) importance is: Why do some children of adequate language skills, intelligence and educational opportunities lag behind their peers in their reading ability, or more simply put: What causes reading problems? This question has been tackled for decades, and the answer has proved to be incredibly complex. (There is still no agreed-upon answer to date.) Experimentally, finding a causal influence of a behavioural outcome is somewhat tricky. Paraphrasing my undergraduate statistics textbook: there are three points that you need to show before claiming causality. (1) There is a correlation between the outcome measure (e.g., reading ability) and performance on the task which is proposed to cause the variability therein (say, phonological awareness). (2) The causal influence precedes the skill that it’s supposed to test (e.g., phonological awareness at an earlier time point is associated with reading at a later time point). (3) Experimentally manipulating the causing variable should affect performance on the outcome measure (e.g., children become better readers if you train them on a phonological awareness task). There are two statistical procedures which are commonly used, in reading research, to show a causal relationship, even though they completely ignore Point (3). An experimental manipulation is essential for making a causal claim: both Points (1) and (2) are susceptible to the alternative explanation that a third factor influences both measures. For example, phonological awareness, even if measured before the onset of reading instruction, may be linked to reading ability in Grade 3, but both of them may be caused by vocabulary knowledge, parental involvement in their children’s education, the child’s intelligence, or statistical learning ability, just to name a few possibilities. 
Most researchers know that correlation ≠ causation, but many seem to succumb to the temptation of inferring causation from structural equation models (SEMs). Paraphrasing my undergraduate statistics lecturer: SEM is a way of rearranging correlations in such a way that it looks like you can infer causality. Here, the outcome measure and predictors are represented as boxes, and the unique variance of each link, obtained by a regression analysis, is written next to the arrow going from a predictor to the outcome measure. Even if a predictor is measured at an earlier time than the outcome measure (thus showing precedence, as per Point 2), this fails to show a causal relationship, as a third, unmeasured factor could be causing both. Having just returned from a selective summer school on literacy, I counted a total of four statements inferring a causal relationship from SEMs during this meeting, one by a prominent professor. They are in good company. Just to pick one example, a recent paper used SEMs to infer causation (Hulme, Bowyer-Crane, Carroll, Duff, & Snowling, 2012) [1]. While I'm at it, there is another methodological approach that has been used to infer causation even though it can't: reading age matched designs. The logic is as follows: if you compare poor readers to good readers matched on age, on any task (say, phonological awareness), you can expect poor readers to perform worse than good readers. This could be because being skilled at this task facilitates learning to read, or performance on this task could be a result of greater reading exposure among the good readers (because good readers tend to read in their free time, while poor readers don't). In a reading age matched design, one compares a group of readers who are poor, given their age, to a group of younger readers who are average or good, given their age, but whose absolute reading ability is equivalent to that of the poor readers.
If poor readers perform worse on phonological awareness tasks than their younger controls, this suggests that the deficit in phonological awareness is not a result of a lack of reading exposure. There are theoretical problems in matching children for their absolute reading ability, because older poor readers and younger average-to-good readers are unlikely to have identical performance on different aspects of reading (see Jackson & Coltheart, 2001): the control group could vary widely in their age and cognitive skill profiles, depending on whether the task used to match them measures nonword reading accuracy, word reading fluency, or text comprehension. Even if it were theoretically possible to match poor readers to younger controls in terms of their reading ability, the caveats from SEMs still apply: it is possible that poor phonological awareness and poor reading skills are both caused by a third underlying factor. Although I know of no peer-reviewed paper that explicitly makes a causal claim based on the reading age matched design, I have heard such claims at conference talks, and causality is often implied in published papers, without explicitly stating the alternative explanation.

The TL;DR summary of this post is very simple: it is never OK to infer causality from correlations.

Hulme, C., Bowyer-Crane, C., Carroll, J. M., Duff, F. J., & Snowling, M. J. (2012). The causal role of phoneme awareness and letter-sound knowledge in learning to read: Combining intervention studies with mediation analyses. Psychological Science, 23(6), 572-577. doi:10.1177/0956797611435921

Jackson, N., & Coltheart, M. (2001). Routes to reading success and failure: Toward an integrated cognitive psychology of atypical reading. New York, NY: Psychology Press.

[1] To be fair, this study also has a training component. Whether the paper makes a convincing claim for a causal relationship is a different question, but either way, someone who only has a quick read of the title and abstract may get the impression that SEMs are a tool for assessing causality.
OPCFW_CODE
If the half width at half maximum (HWHM) of the PSF is safely larger than the pixel size (as for XMM/EPIC-MOS, for instance), then it is possible to use Eqs. (2-3) locally, and estimate pile-up at any point in the PSF. Note that the formulae for monopixels, which are the most important as is shown next, depend on the incoming X-ray flux at a distance of one pixel at most. For XMM/EPIC-MOS the HWHM is expected to be about 4 pixels. Therefore, for a source centred on a pixel, the flux (per pixel) in the eight neighbouring pixels will be about 10% lower than that in the central pixel, so that formulae (1-3) will overestimate pile-up there. But the central pixel receives no more than 1% of the total source flux, and for other pixels the problem is much less severe. The opposite case, of pixels comparable to the HWHM of the PSF, is discussed in Sect. 4.4.

Figure 4: Expected radial distribution of the monopixel count rate (solid line), as opposed to the distribution one would get in the absence of pile-up (dashed line) and that of clean (not piled-up) monopixels (dotted line). The point spread function (PSF) follows a King profile (g(r) defined in (B1)) with and r0 = 5 (diameter at half maximum = 7.2, at half encircled energy = 13.5 pixels). The total input source flux is 100 photons/frame. The pattern distribution is [0.778, 0.195, 0.014, 0.013]. The global flux loss is 61.42%. The global pile-up rate is 1.63%.

Figure 4 shows an example of the expected count rate for an axisymmetric PSF with a radial profile representative of the XMM/EPIC-MOS, a low-energy pattern distribution (the relevant pattern distribution is that averaged over energy, which is dominated by the more numerous low-energy photons) and a large source flux. What is important to note is that even though the flux loss is large (61.42%), as expected for such a bright source, the pile-up fraction remains very modest at 1.63%.
This is because the probability of a true monopixel pile-up is much smaller (only a 1-pixel target area) than that of destroying the monopixel (an 8-pixel target area for other monopixels, and more for bi-, tri- and quadripixels). Adding the diagonal monopixels (Eqs. A3-A4) reduces the flux loss to 58.56%, but increases the pile-up fraction to 2.02%. Counting as good events monopixels touching some other pattern by a corner (Eq. (A5)) further reduces the flux loss to 51.80%, while increasing the pile-up fraction to 2.76%. Note that the figures given here are for integration to 1000 pixels distance, but they do not change much if it is performed to 50 pixels (10 r0) only (4.5% of the flux falls beyond 50 pixels).

Figure 5: Same as Fig. 4 for bipixel events, using formulae (16-17). The global flux loss is 57.15%. The global pile-up rate is 21.30%, much larger than for monopixels because the clean bipixel events are dominated near the centre by adjacent monopixel events counted as one bipixel event.

Figure 5 shows the same plot for bipixels. In this case the flux loss is comparable (57%), but the pile-up fraction is much higher (21%). This is because for this (typical) pattern distribution there are four times more monopixels than bipixels, thus leading to a large fraction of adjacent monopixels being counted as bipixels. The figures are somewhat similar for tripixels and quadripixels. For a less extreme source flux, such as 10 photons/frame, the flux loss and pile-up rate for monopixels are 18.40% and 0.74%, respectively. Including diagonal events, the figures are 14.45% and 0.86%. For bipixels the flux loss is 12.65% and the pile-up rate 12.43%. The main conclusion is that flux loss and transfer of single events to bipixels or larger is the dominant effect of pile-up. For all spectral applications, when pile-up is suspected to have occurred it is much safer to select only monopixel events.
Diagonal events may be used, but they never amount to a large fraction of the true monopixels. However, rejecting all split events incurs a severe loss of effective area at high energy (a factor of 2), where the number of events may not be large even though the number of low-energy photons exceeds the pile-up limit.

Figure 6: Expectation value of the monopixel count rate (integrated over the detector, solid line) as a function of total input source flux, for a PSF following a King profile with and r0 = 5. The pattern distribution is [0.778, 0.195, 0.014, 0.013]. The scale is on the left axis. The dashed line shows the input flux multiplied by (no pile-up) for comparison. The dotted line is the fraction of piled-up events (right axis). This figure shows how the pile-up fraction tends to a constant at large source flux. The fraction of diagonal events (not shown) also tends to a constant at high flux, 7.5% in this case.

After integrating the local rates from equations (2-3) over space (any spatial domain is allowed, but here I consider the full space), one obtains the count rate of clean (not piled-up) monopixel events and the full expected monopixel count rate M1 as a function of the incoming source flux. I call the integral source flux (/frame) , to avoid confusion with the local flux (/pixel/frame) . This has two important consequences. First, because the expected count rate is a strictly increasing function of the input source flux, it is possible to estimate quite precisely the source flux from the measured count rate, and model the piled-up PSF, similarly to "curve of growth" analysis for absorption lines. Secondly, because the pile-up fraction among monopixel events remains small, a spectrum restricted to monopixels is not hopelessly corrupted, even for very bright sources. The detailed study of the spectral perturbations induced by pile-up is deferred to a later paper.
It is also useful to know how pile-up behaves at low flux, allowing one to estimate when it can be safely ignored. Developing Eqs. (6) and (5) to order 2, one easily obtains expressions for the flux loss and the pile-up rate on single events.

Copyright The European Southern Observatory (ESO)
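The paper's pile-up formulae (Eqs. 2-3) are not reproduced in this excerpt, but the qualitative behaviour discussed above already follows from Poisson statistics: with a local flux of lam photons/pixel/frame, the probability that a single pixel collects two or more photons in one frame is 1 - (1 + lam)·exp(-lam). A small Python sketch (an illustration under this simplified single-pixel assumption, not the paper's full pattern treatment):

```python
import math

def pileup_fraction(lam):
    """Fraction of frames in which a pixel that records anything is piled up,
    for a Poisson-distributed photon count with mean lam (photons/pixel/frame)."""
    p_ge1 = 1.0 - math.exp(-lam)                  # at least one photon recorded
    p_ge2 = 1.0 - (1.0 + lam) * math.exp(-lam)    # at least two -> piled up
    return p_ge2 / p_ge1

# At low flux the piled-up fraction grows roughly linearly with flux
# (to first order p_ge2 / p_ge1 ~ lam / 2), consistent with the low-flux
# expansion to second order mentioned in the text.
for lam in (0.01, 0.1, 0.5):
    print(lam, pileup_fraction(lam))
```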
OPCFW_CODE
Cannot install as Add-on

Describe the bug
When I try to install the add-on by adding the repo to the add-on store, I get the error message:

Failed to install add-on
The command '/bin/ash -o pipefail -c apk add --no-cache python3 bash py3-pip' returned a non-zero code: 2

To Reproduce
Steps to reproduce the behavior:
1. Add repo 'https://github.com/Dielee/volvo2mqtt' in Add-on store
2. Refresh and find the Volvo2mqtt Add-on
3. Click on Install
4. See error

Expected behavior
I expect the Add-on to be installed.

Screenshots:

Version info:
Version: core-2023.8.3
Installation Type: Home Assistant Supervised
Development: false
Supervisor: true
Docker: true
User: root
Virtual Environment: false
Python Version: 3.11.4
Operating System Family: Linux
Operating System Version: 5.10.103-v7l+
CPU Architecture: armv7l
Host Operating System: Raspbian GNU/Linux 10 (buster)
Update Channel: stable
Supervisor Version: supervisor-2023.08.1
Docker Version: 20.10.21

As I can't reproduce this issue, please take a look at the supervisor logs from your HA instance. Maybe this will point us in the right direction.

Thanks for the quick reply! This is what I get from supervisor. Some kind of Python issue?
23-08-30 10:56:13 INFO (MainThread) [supervisor.host.apparmor] Adding/updating AppArmor profile: 982ee2c4_volvo2mqtt
23-08-30 10:56:13 INFO (MainThread) [supervisor.docker.addon] Starting build for 982ee2c4/armv7-addon-volvo2mqtt:1.8.6
23-08-30 10:56:16 ERROR (MainThread) [supervisor.docker.addon] Can't build 982ee2c4/armv7-addon-volvo2mqtt:1.8.6: The command '/bin/ash -o pipefail -c apk add --no-cache python3 bash py3-pip' returned a non-zero code: 2
23-08-30 10:56:16 ERROR (MainThread) [supervisor.docker.addon] Build log:
Step 1/15 : ARG BUILD_FROM
Step 2/15 : FROM $BUILD_FROM
 ---> 89386b2d7696
Step 3/15 : RUN apk add --no-cache python3 bash py3-pip
 ---> Running in e256ab914e75
fetch https://dl-cdn.alpinelinux.org/alpine/v3.18/main/armv7/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.18/community/armv7/APKINDEX.tar.gz
WARNING: fetching https://dl-cdn.alpinelinux.org/alpine/v3.18/main: temporary error (try again later)
WARNING: fetching https://dl-cdn.alpinelinux.org/alpine/v3.18/community: temporary error (try again later)
ERROR: unable to select packages:
  py3-pip (no such package): required by: world[py3-pip]
  python3 (no such package): required by: world[python3]

This

WARNING: fetching https://dl-cdn.alpinelinux.org/alpine/v3.18/main: temporary error (try again later)
WARNING: fetching https://dl-cdn.alpinelinux.org/alpine/v3.18/community: temporary error (try again later)

looks like your HA cannot fetch data from dl-cdn.alpinelinux.org. Maybe a local DNS problem?

Thanks for pointing those out. There was no DNS issue, but an issue related to the fact that I'm running a 32-bit OS on a Raspberry Pi while this add-on tries to build a Docker image. Anyway, I found the solution here and could now install the add-on: https://community.home-assistant.io/t/supervisor-cant-install-add-ons/301771

Same issue here. Can you describe what you did to fix it? There seem to be multiple different issues in the linked thread.
GITHUB_ARCHIVE
the other/other

I have been struggling with understanding "the others/others". I know that "the others" is used when you talk about the rest of some identified group; it is like "the remaining ones". So, e.g., here: I implemented 50 out of 100 requirements. The others will be implemented tomorrow. Is that correct?

In this specific context, the difference between using the article or not is quite clear-cut...

1a: The others will be implemented tomorrow - all the others
1b: Others will be implemented tomorrow - some (but usually not all) of the others

The general principle is that you include the definite article the when the thing(s) being referenced have already been mentioned before, or are clearly defined in some other way for the current context (you're talking about something specific, where both speaker and audience know exactly which one(s)). Applying that principle to OP's example, the only possible way the others can unambiguously identify a specific group of "requirements" is if it's all the ones that haven't already been implemented.

Note that things change if the context provides more information. Consider...

2: I have high, medium, and low-priority requirements to implement. I did the high-priority ones today.
2a: The medium-priority requirements will be implemented tomorrow
2b: Medium-priority requirements will be implemented tomorrow

In that case, 2a and 2b would normally both be understood as meaning the same thing (all medium-priority requirements), so you might say it's just a stylistic choice whether to include the article or not. But per @CowperKettle's comment below, it's at least possible to differentiate on the grounds that 2b is more appropriate if you mean some medium-priority requirements will be implemented tomorrow [but not all - there are a lot of these, and it will take several days to implement them all].
Also note that if we substitute rest (or other unambiguous terms such as remaining/outstanding requirements) instead of other, the article must be included, as per the general principle above.

I'll add one comment for the sake of completeness: we can use the adjective other when it precedes a plural noun. So, we could offer as an alternative:

1c: The other requirements will be implemented tomorrow.

The same usage of the applies; if we remove the word the, it means some (but probably not all) will be implemented: Other requirements will be implemented tomorrow.

@J.R.: Yes - effectively, others is a "contracted form" of other requirements, but if we drop the primary noun we still need the pluralising s, so it gets attached to other. Considering your comment in light of the final sentence of my answer, we also end up with the observation that if there's only one "other" (i.e. the referent is unambiguous), then again the article must be included: "I finished one of my two jobs today. The other will be done tomorrow."

I don't understand why in your second example "Medium-priority requirements" in 2b cannot mean "some, probably not all".

@CowperKettle: You're quite right. I've just revisited this page because I got an upvote (yours?) before noticing I was also being pinged by a comment. And my first thought on glancing at my text above was: I seem to have tailed off without explaining why "things change". It was a while ago, so I've completely forgotten how I came to post it as is - perhaps I got called away or something. Anyway, ty for the heads up - I'll edit.
STACK_EXCHANGE
SPAM just keeps getting weirder. Several folks have been getting this SPAM in their inbox. No link, embedded anything, etc. Has the Tuna industry hired a bunch of hackers to spread the word about Tuna? If so, they could have picked ANY other recipe... This just sounds disgusting to me.

Recipe: Tuna Salad

1 ea Env. Golden Onion Soup Mix
1 1/2 c Milk
10 oz Frozen Peas & Carrots *
8 oz Medium Egg Noodles **
6 1/2 oz Tuna, Drained & Flaked
2 oz Shredded Cheddar Cheese ***

* Frozen Peas & Carrots should be thawed.
** Egg Noodles should be cooked and drained.
*** Cheese should equal 1/2 C

1. Preheat oven to 350 degrees F. In large bowl, blend golden onion recipe soup mix with milk; stir in peas & carrots, cooked noodles and tuna. Turn into greased 2-quart oblong baking dish, then top with cheese. Bake 20 minutes or until bubbling.

Received: by 10.231.59.19 with SMTP id j19cs49474ibh; Thu, 9 Dec 2010 11:24:41 -0800 (PST)
Received: by 10.227.136.195 with SMTP id s3mr5449536wbt.49.1291922680336; Thu, 09 Dec 2010 11:24:40 -0800 (PST)
Return-Path: <firstname.lastname@example.org>
Received: from U15184294.mail.maxathome.com (u15184294.onlinehome-server.com [18.104.22.168]) by mx.google.com with ESMTP id w12si3463376wby.36.2010.12.09.11.24.39; Thu, 09 Dec 2010 11:24:40 -0800 (PST)
Received-SPF: softfail (google.com: domain of transitioning email@example.com does not designate 22.214.171.124 as permitted sender) client-ip=126.96.36.199;
Authentication-Results: mx.google.com; spf=softfail (google.com: domain of transitioning firstname.lastname@example.org does not designate 188.8.131.52 as permitted sender) email@example.com
Received: with MailEnable Postoffice Connector; Thu, 09 Dec 2010 11:24:25 -0800
Received: from mail.vfmthdsnd.com ([184.108.40.206]) by mail.maxathome.com with MailEnable ESMTP; Thu, 09 Dec 2010 11:24:22 -0800
Received: from 220.127.116.11 by mail.vfmthdsnd.com (Merak 8.9.1) with ASMTP id RFG93057 for <firstname.lastname@example.org>; Thu, 09 Dec 2010 14:23:57 -0500
Return-Path: email@example.com
Status:
Message-ID: <20101209142341.1b4b3f1b3c@2c5c>
From: "Administrative Support" <firstname.lastname@example.org>
To: <email@example.com>
Subject: Reverting with the information you asked for.
Date: Thu, 9 Dec 2010 14:23:41 -0500
X-Priority: 3
X-Mailer: Direct-Cast
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

Blacklist the sending domain. :-)

..for bad taste in casseroles...

Some Spam sounds good right now... ahh.. Shoyu Spam.. would I love some dat..
// Lista de productos a filtrar
const productos = [
  { nombre: 'tinto', valor: 1000 },
  { nombre: 'pan rollo', valor: 500 },
  { nombre: 'tamal', valor: 8000 },
  { nombre: 'leche', valor: 3000 },
  { nombre: 'huevo', valor: 500 },
  { nombre: 'perico', valor: 2000 },
];

const busqueda = document.querySelector('#busqueda');
const resultado = document.querySelector('#resultado');

const filtrar = () => {
  resultado.innerHTML = '';
  const texto = busqueda.value.toLowerCase();
  for (const producto of productos) {
    const nombre = producto.nombre.toLowerCase();
    if (nombre.indexOf(texto) !== -1) {
      resultado.innerHTML += `<li>${producto.nombre} - valor: ${producto.valor}</li>`;
    }
  }
  if (resultado.innerHTML === '') {
    resultado.innerHTML = '<li>Producto no encontrado...</li>';
  }
};

busqueda.addEventListener('keyup', filtrar);
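For context, the script queries `#busqueda` and `#resultado`, so it assumes markup along these lines (a minimal illustrative sketch; the two ids come from the `querySelector` calls, while everything else, including the file name, is an assumption):

```html
<!-- Minimal page the filter script assumes -->
<input id="busqueda" type="text" placeholder="Buscar producto...">
<ul id="resultado"></ul>
<script src="filtro.js"></script>
```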
One of the major benefits of the Fourier transform is that it can be inverted back into the time domain without losing information. Let us consider the same signal we used in the previous example:

A1 = 10;           % Amplitude 1
A2 = 10;           % Amplitude 2
w1 = 2*pi*0.2;     % Angular frequency 1
w2 = 2*pi*0.225;   % Angular frequency 2
Ts = 1;            % Sampling time
N = 64;            % Number of process samples to be generated
K = 1;             % Number of independent process realizations
sgm = 1;           % Standard deviation of the noise

n = repmat([0:N-1].', 1, K);           % Generate resolution
phi1 = repmat(rand(1,K)*2*pi, N, 1);   % Random phase matrix 1
phi2 = repmat(rand(1,K)*2*pi, N, 1);   % Random phase matrix 2

x = A1*sin(w1*n*Ts+phi1) + A2*sin(w2*n*Ts+phi2) + sgm*randn(N,K);  % Resulting signal

NFFT = 256;        % FFT length
F = fft(x, NFFT);  % FFT of the time-domain signal

If we open F in Matlab, we will find that it is a matrix of complex numbers, each with a real part and an imaginary part. By definition, in order to recover the original time-domain signal we need both parts: together they encode the magnitude and the phase of each frequency component. So to return to the time domain, one may simply write:

TD = ifft(F, NFFT);  % Inverse FFT back to the time domain

Note here that TD is of length 256 because we set NFFT to 256, whereas the length of x is only 64: Matlab zero-padded x before transforming, so TD is the original 64 samples followed by zeros. For example, if NFFT were 1024 and the length 64, then TD would be the 64 samples followed by 960 zeros. Also note that due to floating-point rounding you might see residues on the order of 3.1e-20, but for general purposes: for any X, ifft(fft(X)) equals X to within round-off error.

Let us say for a moment that after the transformation, we did something and are only left with the REAL part of the FFT:

R = real(F);          % Real part of the FFT
TDR = ifft(R, NFFT);  % Time domain of the real part of the FFT

This means we are discarding the imaginary part of our FFT, and therefore we are losing information in this reverse process.

To preserve the original signal without losing information, always keep the imaginary part of the FFT (accessible via imag) alongside the real part, and apply your processing so that both parts, or the full complex spectrum, survive.

figure
subplot(3,1,1)
plot(x); xlabel('time samples'); ylabel('magnitude'); title('Original Time Domain Signal')
subplot(3,1,2)
plot(TD(1:64)); xlabel('time samples'); ylabel('magnitude'); title('Inverse Fourier Transformed - Time Domain Signal')
subplot(3,1,3)
plot(TDR(1:64)); xlabel('time samples'); ylabel('magnitude'); title('Real part of IFFT transformed Time Domain Signal')
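The same loss can be demonstrated outside Matlab with a tiny discrete Fourier transform in plain Python (an illustrative sketch, not the FFT routine used above): inverting the full complex spectrum recovers the signal, while inverting only its real part returns the even part of the signal instead.

```python
import cmath

def dft(x):
    """Naive forward DFT of a sequence."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Naive inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

x = [1.0, 2.0, 3.0, 4.0]
X = dft(x)

# Full inverse: recovers x to within round-off error.
x_back = idft(X)

# Inverse of the real part only: information is lost. For a real input this
# yields the even part of the signal, (x[n] + x[-n]) / 2, not x itself.
x_lossy = idft([complex(Xk.real, 0.0) for Xk in X])
```

For x = [1, 2, 3, 4] the lossy round trip gives the even part [1, 3, 3, 3], illustrating why the imaginary part must be kept.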
Understanding the Azure subscription models

Azure offers a range of subscription models tailored to accommodate businesses of all sizes. Familiarizing yourself with these subscription models will assist you in selecting the most suitable option for your organization’s requirements. When making your choice, take into account factors such as your organization’s usage patterns, budget constraints, and specific business needs. It is also important to understand the cost structure associated with each subscription model, as pricing can fluctuate based on factors such as usage, geographical location, and service type.

To effectively manage costs and optimize resource utilization, Azure provides helpful tools such as the Microsoft Cost Management + Billing portal. This portal enables you to monitor and track your usage and spending across multiple Azure subscriptions. Additionally, Azure Advisor offers personalized recommendations tailored to optimize your Azure resources based on your unique usage patterns and business requirements.

The following is an overview of the different Azure subscription models, each designed to cater to specific business needs and circumstances:

Free tier

The free tier is an excellent choice for individuals or small businesses embarking on their Azure journey. It grants free access to a wide range of Azure services, albeit with specific usage limits. This tier is particularly suitable for those who wish to experiment with Azure or test small workloads without incurring costs. It offers a risk-free environment to explore the capabilities and benefits of Azure.

Pay-as-you-go

The pay-as-you-go subscription model provides the flexibility to pay for Azure services based on your actual usage, without any upfront costs or long-term commitments. This model is well suited for businesses with unpredictable or fluctuating usage patterns.
With pay-as-you-go, you have the freedom to scale your usage up or down as needed, ensuring that you only pay for the services you actually use. This allows for greater cost control and agility, making it an ideal choice for organizations that require flexibility in their Azure consumption.

Azure for Students

Azure for Students is a no-cost subscription model designed specifically for students, providing them with access to a wide range of Azure services for educational and experimental purposes. This subscription model is tailored to support students in their learning journey by offering hands-on experience with cloud technologies. With Azure for Students, students can explore and experiment with various Azure services, gaining practical knowledge and skills that are in high demand in today’s digital landscape. It is an excellent opportunity for students to delve into cloud computing, develop their technical expertise, and unlock the potential of Azure for their academic and personal projects.

Enterprise Agreement (EA)

EA is a customized agreement designed for larger organizations that have substantial Azure usage. This agreement provides tailored pricing discounts and additional benefits that are specifically based on the organization’s size and usage patterns. The EA offers a flexible and scalable solution for organizations to optimize their Azure usage and streamline their cloud operations. By entering into an EA, organizations can unlock cost savings and gain access to specialized support, enabling them to maximize the value and efficiency of their Azure deployment. It is an ideal option for enterprises that require a comprehensive and personalized approach to managing their Azure services and resources.

Cloud Solution Provider (CSP)

The CSP subscription model is a collaborative partnership between Microsoft and chosen cloud solution providers.
Through the CSP program, customers gain access to customized Azure solutions, specialized support, and flexible billing options. This model allows customers to work closely with their CSP partner to design, deploy, and manage their Azure environment according to their specific requirements. The CSP subscription model offers a comprehensive solution for organizations seeking a more personalized and hands-on approach to utilizing Azure services.

Microsoft Partner Network (MPN)

The MPN subscription model is specifically tailored for Microsoft partners, providing them with a range of valuable benefits to support their business operations. With an MPN subscription, partners gain access to internal-use licenses, enabling them to utilize Microsoft products and services within their own organization for demonstration, development, and testing purposes. Additionally, partners receive training resources, technical support, and valuable insights into the latest Microsoft technologies and solutions. The MPN subscription model helps Microsoft partners enhance their expertise, expand their capabilities, and deliver innovative solutions to their customers.

Having a clear understanding of the various Azure subscription models enables you to select the most suitable option for your organization, aligning with your specific requirements and budget. Azure’s cost management and optimization tools play a crucial role in maximizing the value of your subscription by helping you monitor and control costs effectively. These tools ensure that your Azure resources are utilized efficiently, enabling you to achieve the best return on investment while maintaining cost control. Making informed decisions about your Azure subscription and utilizing cost management tools will give your organization the ability to optimize its cloud resources and drive business success.
class Iteration < ActiveRecord::Base
  # for use after failed save_with_planned_stories_attributes!
  attr_accessor :planned_stories
  attr_protected :project_id

  belongs_to :project
  has_many :stories
  has_many :burndown_data_points

  validates_presence_of :name, :duration, :project_id
  validates_numericality_of :duration,
    :greater_than_or_equal_to => 1, :only_integer => true

  named_scope :active,
    :conditions => 'start_date IS NOT NULL AND (end_date IS NULL OR end_date > CURRENT_DATE)'
  named_scope :pending, :conditions => 'start_date IS NULL'
  named_scope :recently_finished,
    :conditions => 'end_date <= CURRENT_DATE AND end_date >= CURRENT_DATE - 7'
  named_scope :finished, :conditions => 'end_date <= CURRENT_DATE'

  def validate
    errors.add(:stories, "must be assigned") if stories.empty?
    if (active? && project &&
        project.organisation.iterations.active.count >= project.organisation.payment_plan.active_iteration_limit)
      errors.add(:organisation, "active iteration limit reached")
    end
    if (start_date? && !initial_estimate.nil? && initial_estimate <= 0)
      errors.add(:stories, :not_estimated)
    end
  end

  def save_with_planned_stories_attributes!(attributes)
    Iteration.transaction do
      stories.clear # no dependent => :destroy or :delete_all
      self.planned_stories = project.stories.find(attributes.keys)
      planned_stories.each do |story|
        story_attributes = attributes[story.id.to_s]
        included = story_attributes.delete('include') == '1'
        if included
          story.attributes = story_attributes
          self.stories << story
        else
          story.update_attributes!(story_attributes)
        end
      end
      save!
    end
  end

  def name
    if attributes["name"]
      attributes["name"]
    elsif project
      "Iteration #{project.iterations.count + 1}"
    end
  end

  def to_s
    name || 'New Iteration'
  end

  def story_points_remaining
    stories.incomplete.inject(0) do |sum, st|
      sum + st.estimate.to_i
    end
  end

  def start
    unless active?
      self.update_attributes(
        :start_date => Date.today,
        :initial_estimate => story_points_remaining
      )
    end
  end

  def duration=(value)
    super value
    if value && start_date?
      self.end_date = start_date + value.to_i
    end
  end

  def start_date=(value)
    super value
    if value && duration?
      self.end_date = value + duration
    end
  end

  def days_remaining
    end_date - Date.today
  end

  def pending?
    !active?
  end

  def active?
    !self.start_date.nil?
  end

  def finished?
    end_date? && end_date <= Date.today
  end

  def burndown(width = nil)
    options = {}
    options[:width] = width unless width.nil?
    Burndown.new(self, options)
  end

  def update_burndown_data_points
    return if !active? || end_date <= Date.today

    data_point = burndown_data_points.find_by_date(Date.today)
    if data_point
      data_point.update_attributes(:story_points => story_points_remaining)
    else
      burndown_data_points.create(
        :date => Date.today,
        :story_points => story_points_remaining
      )
    end
  end

  def self.update_burndown_data_points_for_all_active
    active.each do |iteration|
      iteration.update_burndown_data_points
    end
  end
end
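The coupled duration= / start_date= setters keep end_date consistent with whichever value was assigned last. Stripped of ActiveRecord, the behaviour looks like this (a plain-Ruby illustrative sketch, not the real model):

```ruby
require 'date'

# Plain-Ruby stand-in (an illustrative sketch, NOT the real ActiveRecord model)
# for the coupled duration= / start_date= setters: after either assignment,
# end_date is recomputed so it always equals start_date + duration.
class IterationDates
  attr_reader :start_date, :duration, :end_date

  def duration=(value)
    @duration = value
    @end_date = @start_date + value.to_i if value && @start_date
  end

  def start_date=(value)
    @start_date = value
    @end_date = value + @duration if value && @duration
  end
end

it = IterationDates.new
it.start_date = Date.new(2024, 1, 1)
it.duration = 14
# end_date is now 2024-01-15, set as a side effect of the duration= assignment
```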
/**********************************************************
 * matrixfinal.c
 *
 * USE: matrixfinal <n, for an n x n matrix> <threads>
 * Features: Timer!
 * TODO: make >8192 faster. Or give a warning.
 **********************************************************/
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <time.h>       /* time() for srand */
#include <sys/time.h>   /* gettimeofday() */

#define ELEMENT float   /* each element is a float */

ELEMENT **matrix_a, **matrix_b, **result;  /* matrices */

void *generate_matrix(void *id);  /* forward declarations */
void *dotter(void *);

int num_threads, matrix_size;

void *dotter(void *arg)
{
    long chunk = (long)arg;
    long column, row, i;
    ELEMENT t;

    /* it doesn't get any simpler */
    for (column = 0; column < matrix_size; column++) {             /* for each column */
        for (row = chunk; row < matrix_size; row += num_threads) { /* for each row in this thread's chunk */
            t = 0.0;
            for (i = 0; i < matrix_size; i++)
                t += matrix_a[row][i] * matrix_b[i][column];       /* the main math is here */
            result[row][column] = t;                               /* store the result */
        }
    }
    return NULL;
}

/* matrix generator */
void *generate_matrix(void *id)
{
    long inner_id = (long)id;
    long i;

    for (i = inner_id; i < matrix_size; i += num_threads) {
        int j;
        for (j = 0; j < matrix_size; j++) {
            matrix_a[i][j] = rand();
            matrix_b[i][j] = rand();
        }
    }
    return NULL;
}

int main(int argc, const char *argv[])
{
    struct timeval start, end;  /* timer! */
    long totaltime, seconds, useconds;
    long i;
    size_t elmt_size;

    if (argc < 3) {
        fprintf(stderr, "USE: %s <matrix size> <threads>\n", argv[0]);
        return 1;
    }

    matrix_size = atoi(argv[1]);
    if (matrix_size > 4096)
        printf("%d size? Are you crazy?! Running anyway!\n", matrix_size);
    num_threads = atoi(argv[2]);
    printf("%d by %d matrix pair with %d threads running\n",
           matrix_size, matrix_size, num_threads);

    gettimeofday(&start, NULL);

    /* init matrices */
    srand(time(NULL));
    printf("Creating matrix pairs\n");
    matrix_a = malloc(elmt_size = matrix_size * sizeof(ELEMENT *));
    matrix_b = malloc(elmt_size);
    result   = malloc(elmt_size);
    for (i = 0; i < matrix_size; i++) {
        matrix_a[i] = malloc(elmt_size = matrix_size * sizeof(ELEMENT));
        matrix_b[i] = malloc(elmt_size);
        result[i]   = malloc(elmt_size);
    }

    /* init pthreads here */
    pthread_t *generate    = malloc(elmt_size = num_threads * sizeof(pthread_t));
    pthread_t *thread_work = malloc(elmt_size);

    printf("Randomizing matrix pairs\n");
    /* making matrix pairs here */
    for (i = 0; i < num_threads; i++)
        pthread_create(&generate[i], NULL, generate_matrix, (void *)i);
    /* joining the randomness back together */
    for (i = 0; i < num_threads; i++)
        pthread_join(generate[i], NULL);

    /* doing actual work here */
    printf("Doing work now\n");
    for (i = 0; i < num_threads; i++)
        pthread_create(&thread_work[i], NULL, dotter, (void *)i);
    for (i = 0; i < num_threads; i++)
        pthread_join(thread_work[i], NULL);

    gettimeofday(&end, NULL);
    seconds  = end.tv_sec - start.tv_sec;
    useconds = end.tv_usec - start.tv_usec;
    totaltime = seconds * 1000 + useconds / 1000;  /* elapsed milliseconds */
    printf("Total time: %ld milliseconds.\n", totaltime);
    return 0;
}
[Kyle Neath](http://warpspire.com/) wrote an [excellent piece](http://warpspire.com/posts/designing-github-mac/) about the design process of [GitHub for Mac](http://mac.github.com/). What’s more interesting is his concern about the Mac OS X/Cocoa framework as a whole; it’s the best critique of Cocoa I’ve seen in a while, and I sincerely recommend that any developer interested in Cocoa read it. According to Kyle, a first-time Mac OS X designer/developer, Cocoa is dying for a framework. This argument feels a bit ironic, since I came from the age when a bunch of Cocoa fanboys were labeled the “[Delicious Generation](http://en.wikipedia.org/wiki/Delicious_Generation)” who only wrote fancy, good-looking apps with no actual functionality, while the good ol’ Carbon guys looked so damn unattractive. Has it now come to the downfall of us Cocoa developers?[^1] Kyle’s main reasons for not liking Cocoa are as follows: 1. Drawing in code is slow and painful. Images are easier to work with and result in more performant code. 2. There is no layout engine for Cocoa. If you want two elements to rest side by side, you’ll need to calculate the pixel size of the text, padding, borders, margins — then manually position the next element. 3. There is no styling engine in Cocoa. (Changing the background color of a button, for instance, requires significant work.) 4. Learning the differences between layer-backed views and layer-hosted views — understanding that you have to subclass _everything_ — balancing delegates, weak connections, strong connections, KVC, view controllers, and notifications — understanding little intricacies like how AppKit flips `.xib`s when it loads them up, or how hard it is to make one word in a sentence bold. I will try to share some of my opinions about them, one by one. #### Is drawing in code slow and painful? Being a low-level graphics developer for so long, I know my opinion must be biased.
But still, I’ve never felt drawing in Cocoa (or Cocoa Touch) to be slow; working with Core Graphics, the Cocoa drawing API, and APIs like Core Text is quite pleasant actually. Drawing operations are always blazingly fast, and I honestly couldn’t see a better way to solve these problems without resorting to the low level. Yes, they are imperative APIs rather than declarative, and yes, you have to write your `-drawRect:` code cautiously. But low-level drawing itself has never been a bottleneck for my programs. The _real_ problems are **scheduling what to draw** and **finding out when stuff is drawn**. So in my opinion, it’s not the drawing itself that’s slow and painful; rather, you need a **thorough understanding of the graphics stack** to write efficient low-level imperative drawing code. That’s a damn steep learning curve, and it’s why very few people can write efficient yet complex drawing code, even when veterans like Loren Brichter recommend that you “[do your own drawing](http://blog.atebits.com/2008/12/fast-scrolling-in-tweetie-with-uitableview/)”. However, there is no silver bullet. Cocoa Touch makes life a little bit easier by simplifying the view hierarchy and building on Core Animation from the ground up, but to get butter-smooth scrolling you still need to do your own drawing cautiously. Apple did provide layer-backed and layer-hosted views in Cocoa, but they were too conservative to re-architect the entire Cocoa view stack around Core Animation. That’s a pity, but luckily you can always roll your own (as we always did). Anyway, the best framework I can imagine is one that uses a declarative API to construct most of the UI and leverage the full power of the GPU, with the flexibility to do custom drawing through an imperative API. #### No layout engine for Cocoa? Apparently, Apple is solving this with [Cocoa Autolayout](http://www.scribd.com/doc/50426060/Mac-OS-X-v10-7-Lion). I can’t say much about this (it’s under NDA), but it does look promising. #### No styling engine in Cocoa?
Well well, this is controversial. On one hand people complain that UIs in Mac OS X apps are not consistent anymore; on the other hand people detest [Aqua](http://en.wikipedia.org/wiki/Aqua_(user_interface)) and dream of change from the bottom of their hearts. I can understand Kyle, since he is more of a designer than a developer. No doubt I think that’s a fair judgement, but it’s hard to say whether the difficulty of styling contributes to [HIG](http://developer.apple.com/library/mac/#documentation/UserExperience/Conceptual/AppleHIGuidelines/XHIGIntro/XHIGIntro.html) consistency or drives people to the opposite side of it. As a user of Mac OS X I feel that UIs should be “semi-consistent”, or as [John Gruber](http://daringfireball.net) [puts](http://daringfireball.net/2011/01/uniformity_vs_individuality_in_mac_ui_design) it eloquently: [uniformity has been replaced by conformity](http://vimeo.com/21742166). Apple is acting slowly here, and they can definitely improve on this. The solution doesn’t have to be very innovative; adding styling flexibility here and there is no groundbreaking change. Cocoa Touch is not far superior either: there is still a lot of work to do if you don’t like the standard controls from UIKit. In contrast, my biased opinion would say that [QML/Qt Quick](http://qt.nokia.com/qtquick/) appears to be a much more flexible solution; it’s [XUL](http://en.wikipedia.org/wiki/XUL)/[Air](http://www.adobe.com/products/air/)/[JavaFX](http://javafx.com/)/[WPF](http://en.wikipedia.org/wiki/Windows_Presentation_Foundation) **done right**. Though I am biased, anyone who knows me well knows that, as a low-level graphics engineer, I’m hard to convince. #### Cocoa has complicated intricacies? Yes, any framework that has survived more than 20 years will have some intricacies; that’s inevitable. Actually, I’m surprised by the fact that so many of the [NeXTSTEP](http://en.wikipedia.org/wiki/NeXTSTEP) APIs are still in good use.
Come on, it’s a fast-changing industry, and you really need to be a genius to predict the trend more than 10 years out. However, most of Kyle’s concerns came from his expectations: he expected to fit the UIKit model **as is** into Cocoa, and it didn’t work. It probably never will; as I said previously, Apple is conservative in that sense. For a stable platform like Mac OS X, they tend to provide _new options_ rather than forcing everyone to _adapt to_ a new model. That inherently complicates the framework, of course. Changes that we Cocoa developers applauded can be seen as too cautious by iOS developers. Anyway, I think it’s fair to say that Cocoa isn’t the best API for modern, dynamic UI, and alternatives like [Chameleon](https://github.com/BigZaphod/Chameleon) may eventually surpass it. Nevertheless, there is no easy path between the two worlds: you either port the code to the new model (UIKit-like) or adapt to the hybrid model (Cocoa-like). For iOS developers the former is definitely easier, but for us the latter seems the better option. That’s it. I’m surprised you’ve read my ramblings this far 🙂 [^1]: I’ve never labeled myself as part of the “Delicious Generation”; I would rather build something useful if it can’t be good-looking at the same time. But still, it’s sad.
robin at gareus.org  Tue Aug 19 00:22:05 EDT 2008

-----BEGIN PGP SIGNED MESSAGE-----

Marcos Guglielmetti wrote:
> On Monday, 11 August 2008 05:32, krgn wrote:
> | hm this kernel hard-locked my machine after only a little while of
> | using it with some more or less intense audio processing. what are
> | other peoples experiences? I know this sounds terrible, but I am
> | glad it does, as it means its not my machine/setup :-)
> | cheers for putting this together though,
> | karsten
> A new one:

Alas, same problems as 188.8.131.52-rt1-libre & here it still fails to load the ata_piix driver; SATA shows up as /dev/hda since the generic IDE driver is initialized before ata_piix, and SysRq is also disabled again.

Uniform Multi-Platform E-IDE driver
ide: Assuming 33MHz system bus speed for PIO modes; override with idebus=xx
ide_generic: please use "probe_mask=0x3f" module parameter for probing all legacy ISA IDE ports
Probing IDE interface ide0...
Driver 'sd' needs updating - please use bus_type methods
Driver 'sr' needs updating - please use bus_type methods
ata_piix 0000:00:1f.2: version 2.12
ACPI: PCI Interrupt 0000:00:1f.2[B] -> GSI 16 (level, low) -> IRQ 16
ata_piix 0000:00:1f.2: MAP [ P0 P2 IDE IDE ]
PCI: Unable to reserve I/O region #1:8 at 1f0 for device 0000:00:1f.2
ata_piix 0000:00:1f.2: failed to request/iomap BARs for port 0 (errno=-16)

On this debian PC the '-libre' kernels are the only ones with this issue.

But back to the actual problem: the recent days have seen lots of fixes pass by on linux-rt-users. I just read yesterday's "hrtimers stuck in waitqueue" quote by Gilles Carry: "Though the test hangs immediately on my ppc64 (8 CPU), it can take tens of minutes on my x86_64 (8 CPU)..." - he attached a patch to "fix a bug for high resolution timers initialized by hrtimer_init_sleeper (nanosleep and futexes) which can get stuck on a wait queue. They apply onto 2.6.26-rt1"

could be *the one* ;)

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (GNU/Linux)
-----END PGP SIGNATURE-----

More information about the Linux-audio-tuning
#include "simd_scan.hpp"
#include "util.hpp"

#include <cmath>
#include <cstdint>
#include <memory>
#include <vector>

// original compression function (with some bug fixes...) for reference
void* compress_9bit_input_original(std::vector<uint16_t>& input)
{
    auto bits_needed = BITS_NEEDED;
    auto mem_size = bits_needed * input.size();
    int array_size = ceil((double)mem_size / 64);
    auto buffer = new long long[array_size](); // TODO allocated on heap!

    int remaining_buffer_size = 64;
    int idx_ = 0;

    for (size_t i = 0; i < input.size(); i++)
    {
        unsigned long long tmp_buffer = 0;
        tmp_buffer = tmp_buffer | input[i];
        // shift by the in-word bit offset; the modulo keeps the shift below 64
        // (shifting a 64-bit value by >= 64 is undefined behaviour)
        tmp_buffer = tmp_buffer << ((i * bits_needed) % 64);
        buffer[idx_] = buffer[idx_] | tmp_buffer;

        remaining_buffer_size -= bits_needed;
        if (remaining_buffer_size == 0)
        {
            idx_++;
            remaining_buffer_size = 64;
            continue;
        }
        else if (remaining_buffer_size < bits_needed)
        {
            // logic to handle overflow bits: the next value straddles two words
            i++;
            if (i == input.size()) break; // avoid indexing out of range

            tmp_buffer = 0;
            tmp_buffer = tmp_buffer | input[i];
            tmp_buffer = tmp_buffer << (64 - remaining_buffer_size);
            buffer[idx_] = buffer[idx_] | tmp_buffer;
            idx_++;

            // second half
            tmp_buffer = 0;
            tmp_buffer = tmp_buffer | input[i];
            tmp_buffer = tmp_buffer >> remaining_buffer_size;
            buffer[idx_] = buffer[idx_] | tmp_buffer;

            remaining_buffer_size = 64 - (bits_needed - remaining_buffer_size);
        }
    }
    return (void*) buffer;
}

std::unique_ptr<uint64_t[]> compress_9bit_input(std::vector<uint16_t>& input)
{
    auto element_size = 8 * sizeof(uint64_t);
    auto compression = BITS_NEEDED;
    auto buffer_size = compressed_buffer_size(compression, input.size()) / sizeof(uint64_t);
    auto buffer = std::make_unique<uint64_t[]>(buffer_size);

    // these are for making the ptr align to 16/32 byte boundaries,
    // turns out it makes no difference in performance though.
    //auto buffer = std::unique_ptr<uint64_t[]>((uint64_t*) new __m128i[array_size / 2]);
    //auto buffer = std::unique_ptr<uint64_t[]>((uint64_t*) new __m256i[array_size / 4]);
    //auto buffer = std::unique_ptr<uint64_t[]>((uint64_t*) _mm_malloc(array_size * sizeof(uint64_t), 256));

    int remaining_buffer_size = element_size;
    int idx_ = 0;

    for (size_t i = 0; i < input.size(); i++)
    {
        uint64_t tmp_buffer = 0;
        tmp_buffer = tmp_buffer | input[i];
        // the modulo keeps the shift below the word width (undefined otherwise)
        tmp_buffer = tmp_buffer << ((i * compression) % element_size);
        buffer[idx_] = buffer[idx_] | tmp_buffer;

        remaining_buffer_size -= compression;
        if (remaining_buffer_size == 0)
        {
            idx_++;
            remaining_buffer_size = element_size;
            continue;
        }
        else if (remaining_buffer_size < compression)
        {
            i++;
            if (i == input.size()) break;

            tmp_buffer = 0;
            tmp_buffer = tmp_buffer | input[i];
            tmp_buffer = tmp_buffer << (element_size - remaining_buffer_size);
            buffer[idx_] = buffer[idx_] | tmp_buffer;
            idx_++;

            // second half
            tmp_buffer = 0;
            tmp_buffer = tmp_buffer | input[i];
            tmp_buffer = tmp_buffer >> remaining_buffer_size;
            buffer[idx_] = buffer[idx_] | tmp_buffer;

            remaining_buffer_size = element_size - (compression - remaining_buffer_size);
        }
    }
    return buffer;
}
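A matching decoder is not part of this file; since each 9-bit value starts at bit offset i * 9 in the packed words, it can be read back like this (a hypothetical sketch, not from the original code):

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical decoder for the 9-bit packing scheme above: value i occupies
// bits [i*9, i*9 + 9) of the packed uint64_t array, possibly straddling
// two adjacent words.
uint16_t decode_9bit(const uint64_t* buf, std::size_t i)
{
    std::size_t bit  = i * 9;
    std::size_t word = bit / 64;
    std::size_t off  = bit % 64;

    uint64_t v = buf[word] >> off;
    if (off > 64 - 9)                        // value straddles a word boundary
        v |= buf[word + 1] << (64 - off);
    return static_cast<uint16_t>(v & 0x1FF); // keep the low 9 bits
}
```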
How do I turn on Ancestry's Relationship Calculator?

The "relationship to me" feature has disappeared from my tree on Ancestry. I rely on this heavily not to get sucked down rabbit holes of people not actually connected except by marriage. I've deleted cookies and emptied cache. I've tried a different browser. How can I get it to re-appear?

Welcome to G&FH:SE. What program/app/website are you asking about? I used that feature of ancestry.com.au today without a problem. What were the precise and detailed steps that you performed?

Hi, welcome to G&FH.SE! If you're talking about Ancestry's tree system, check under the tree settings. The relationship calculator appears when "Who you are in this tree" and "Your home person in this tree" are set to the same person.

You can use the Edit link under your question to add information and improve your question. If you need more information about how the site works, see the [help].

Ancestry's Relationship Calculator was apparently introduced around 2010 and is shown in this early blog post: Find out how you are related to other people in your Ancestry.com Member Tree (October 7, 2010). The support article Seeing How People in Your Ancestry Tree are related to you has partial instructions on how to activate the Relationship Calculator. To get started:

1. In your tree, click the tree name menu in the top-left corner and select Tree Settings.
2. On the right side of the page beneath Who you are in this tree, click choose. If "who you are" has already been set and you want to change it, click change.
3. Start typing your name, then select it from the drop-down menu that appears.
4. Once you've chosen your name, click Select.

Since these articles were written, the way the Relationship Calculator works has changed, but the support article doesn't reflect this.
According to Ancestry's Crista Cowan (her "Barefoot Genealogist" videos can be seen on YouTube), to make the Relationship Calculator appear you also have to set yourself as the home person in the tree. This setting is on the right side of the page, above Who You Are in This Tree. Once you are set as both the home person and who you are in this tree, the Relationship Calculator should reappear in the banner on a person's profile. You can use the Relationship Calculator to check the relationship between any two people on your tree, as long as both settings are set to the same person. That is, if you are showing your tree to a child or cousin, set that person as the home person and "who you are", and the calculator will show the relationship to that person instead. Once you are finished, you can restore the settings to yourself as "home" and "who you are" to see the relationship to you.
Least significant byte:
- .1 Spam sending domain
- .2 Multi-hop relay
- .4 Dialups not in MAPS DUL
- .8 Wants spam complainers to jump through hoops
- .16 No working abuse address
- .32 Hosts spammers' web sites
- .64 Hosts spammers' email dropboxes
- .128 Breakin attempts

Second least significant byte:
- 1. Sued or prosecuted a DNSBL lister
- 2. DOS attack
- 4. Supplier of spamware
- 8. Knowingly supports spammers
- 16. Legal threats
- 32. Attempted mail relay exploits
- 64. Attempted formmail exploits

One thing that Blars doesn't have is an automated removal screen. I would have thought that this would have eased his workload. He could manually override and disallow removal requests from confirmed spamming domains, but accidental inclusions could be remedied by users (plus he'd have a record of the removal request in his log files). That's probably why the major lists use this approach.

I see that he has already listed our Polish spammers. He has given them a lookup of 127.1.0.17, which according to his list means they are guilty of spamming and of not having a working abuse address. And I concur fully with his listing. This Polish spam should be served on toast! Major lists, please take note... you might as well list these guys now! I'm sure it won't be long before we see them in spamcomp and dnsbl. Go get 'em, Blars!

And while on the topic of spam, I received some HAM from www.bizwiz.com. I did actually subscribe to their list when I was setting up this site. The email was not HTML-only and was only 5.6K in size. But it did have a rather hammy flavour. I thought I had requested to be removed from their list, but I can't remember. Whatever; they had a removal request, which I used. The removal screen came back and said that I had been removed. So I just thought I'd make an entry in my blog to note that Clickit Add-a-link, BizWiz, should not be sending me any more e-mails. (Take care, BizWiz!)

The worst offenders in this area have been TechRepublic.
About eighteen months ago, I wanted to view a document on their site and they would not allow me to see it until I signed up. I carefully checked the boxes saying that I did not want to receive any e-mail (apart from the one with my password) from them, and I did not want to receive promotional materials or any other communication. This displayed the usual notice that they would "still respect me in the morning". A week later my first promotional e-mail arrived from TechRepublic. And continued each month thereafter, despite repeated requests from me to stop sending e-mail. Eventually (about Jan 2004) TechRepublic's MX was listed and I did not get any more e-mail. They struggled for a while to get unlisted. And then when they did, resumed sending me e-mail! Typically, these were HTML-only documents advising me to upgrade to Service Pack 2 or that the latest version of MS-Access was available now. Here is a sample of the last one that got through: [ lots of graphics ... ] Security. For such a simple word it causes so many headaches. Attending the Microsoft Security Summit 2005 can help. This year's Summit will include topics as diverse as `Fighting SPAM', `Defending against Malicious Software' and `Tools for Quality Code'. We'll focus on practical skills, processes and technology that can help with your day-to-day security challenges. Whether you are an IT Professional or a Developer this event is also an opportunity to get an update on the latest developments from Microsoft. It's running in Sydney, Melbourne, Brisbane, Adelaide, Canberra and Perth. Best of all it's complimentary - with no charge to attend. So you can come all day or just to the sessions in the agenda which interest you. Places are strictly limited and filling fast. ©2004 Microsoft Corporation. All rights reserved. Microsoft, the Microsoft logo, MSDN and the MSDN logo are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. 
[ more graphics ... ] You have been selected to receive this e-mail because you indicated you wanted to receive valuable information and product updates from technology vendors when you provided your e-mail address to [ ... blah, blah, blah, etc, etc ... the usual weasel words ] [ and more graphics ... ] In December, I manually added them to my permanent block list. I haven't heard from them since however ... perhaps they've got the message now?
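For what it's worth, the Blars return codes quoted earlier can be decoded mechanically. A small sketch (flag names paraphrased from the byte list above; only the least significant byte is decoded here, and the function name is my own invention):

```python
# Bit flags for the least significant octet of a Blars DNSBL answer.
FLAGS_LSB = {
    1: "Spam sending domain",
    2: "Multi-hop relay",
    4: "Dialups not in MAPS DUL",
    8: "Wants spam complainers to jump through hoops",
    16: "No working abuse address",
    32: "Hosts spammers' web sites",
    64: "Hosts spammers' email dropboxes",
    128: "Break-in attempts",
}

def decode(result):
    """Decode a DNSBL answer like '127.1.0.17' into a list of reasons."""
    octets = [int(o) for o in result.split(".")]
    lsb = octets[3]
    return [name for bit, name in FLAGS_LSB.items() if lsb & bit]
```

Feeding it the Polish spammers' lookup, `decode("127.1.0.17")`, yields exactly the two reasons noted above: a spam sending domain with no working abuse address (17 = 1 + 16).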
List comprehension to get rows that contain matching values from 2 separate dataframes

So I'm trying to match 2 very different dataframes on a single column, each containing numbers in string format. I need a concise, very fast solution, so I tried using list comprehension and succeeded a few days back, then lost my work; I'm trying to recreate it.

df1=pd.DataFrame({'col':['hey','hi','how ya durn']})
df2=pd.DataFrame({'col':['hey','hi','hello','what']})
df3=df2[[x for x in df2.col for y in df1.col if x in y]]
df3.head()

So I made this work the other day with 2 dataframes, both 20-30 columns, ~100k rows, different column data except for 1 column each, which I'm trying to match on. I either get ValueError: Item wrong length # instead of #. or it takes an insane amount of time because the system I'm using is slow. I know I need to use list comprehension or something faster, and I know .apply() takes too long. Both of my matching columns contain 10-15 length numbers in string format. When I got it to work a few days back using a similar list comp one-liner it took seconds to complete and was perfect, and now I've lost it and can't recreate it, lol. Any help is greatly appreciated. (P.S. I may have used an any() statement in the list comp, and I'm 95% sure I used if x in y.)

You may have confused yourself. You are using "col" instead of "columns". Try df3=df2[[x for x in df2.columns for y in df1.columns if x in y]]

No, I'm just trying to compare those 2 singular columns; similar to df1['col'], I use df1.col

Then it appears I am the one who has confused myself :) -- what is your desired output? Can you provide a data example of desired output based off the input?

By any chance are you trying to do this? See my answer to I think a similar question? https://stackoverflow.com/questions/64345704/how-can-i-merge-a-pandas-dataframes-based-on-a-substring-from-one-of-the-columns/64345815#64345815

ha, I appreciate it anyways.
I'm looking to get all of the rows from df2 where its 'col' values match df1's 'col' values. Out of the 100k-ish rows in both of my dataframes I'm expecting around a thousand matches.

Almost; it appears he's trying to append a column based on matches, while I'm trying to filter out rows that match. I'll try fitting that code to my problem real quick to check.

You can find strings in both columns with

df2[df2.col.isin(df1.col)]

Out:
  col
0 hey
1 hi

A solution with comprehension would be

df2[df2.col.isin([x for x in df2.col for y in df1.col if x in y])]

But this gets slow for larger columns.

The only problem I had with .isin() was that it was slower than if x in y, and it wouldn't check for substrings; I may need to match 'how' inside 'how ya durn' as well, if that makes sense.

Substring matching is a different question. Can you include this case in your example data and your expected output?
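A sketch of both behaviours discussed in this thread (not necessarily the one-liner the asker lost): exact matches via isin, and the substring case via a single joined "haystack" string. The haystack trick assumes, as the question states, that the values are plain digit strings, so a NUL separator cannot create false cross-boundary matches; the per-row work is then a single C-level substring test rather than a nested Python loop over both columns.

```python
import pandas as pd

df1 = pd.DataFrame({'col': ['hey', 'hi', 'how ya durn']})
df2 = pd.DataFrame({'col': ['hey', 'hi', 'hello', 'what']})

# Exact matches: isin builds a boolean mask with one entry per row of df2,
# which avoids the "ValueError: Item wrong length" from the nested comprehension
# (that comprehension produces one entry per (x, y) pair, not per row).
exact = df2[df2.col.isin(df1.col)]

# Substring matches: join df1's values into one haystack with a separator
# that cannot occur in the data, then do one fast "x in haystack" per row.
haystack = '\x00'.join(df1.col)
subs = df2[df2.col.map(lambda x: x in haystack)]
```

For ~100k rows on each side this stays close to linear in the total data size, instead of the 100k x 100k comparisons the original comprehension performs.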
I define Serverless as an approach to using the cloud (in my case, AWS) that exclusively uses fully managed services billed based on actual usage. Typically, this means API Gateway backed by Lambda microservices. The approaches detailed here have different pros and cons, and each is suitable for particular use cases. It may be a shock to some, but it is absolutely possible to develop monolithic solutions in Serverless architecture. You simply develop all of your code in a single Lambda microservice. Though, ‘micro’ may not be the correct term for that given the potential size it could become. I have used this approach when rapid prototyping as it is fast, so you can prove a theory or understand a particular approach quickly. However, the result is rarely suitable for a production environment and can end up being a bit spaghetti code. That being said, for small and simple applications such as processing a registration form, this can also be a suitable approach. - Fast, all the code sits together. - Easy, just one lambda to manage. - Low latency because everything happens in a single microservice. - Unlikely to be suitable for production. - High risk of spaghetti code. - Rapid prototyping and experiments. - Simple backends such as a form processor. Layers are a feature in Lambda where you can add code that can be included in one or more microservices. Often these are used to store third-party libraries to help segregate your own code from external code. This helps facilitate updating the third-party code without touching your own code. This can be beneficial, especially for frequently updated open-source libraries. Of course, you are free to use the layer feature as you wish, so another common pattern is to store your own shared scripts and classes there that can then be used by multiple microservices. 
This can be taken to extremes resulting essentially in a monolith app sitting in the layer and the Lambda microservices being simple gateways with minimal input processing before using the monolith layer for the bulk of the application capabilities. Despite having many Lambda microservices in this design, this is not a true microservice architecture because the microservices are not independent. If the layer breaks, the entire application will break, so it is really just a variant on the monolith. This can be a good approach for simple backends that essentially do the same processing on each type of input. For example, basic CRUD (Create-Retrieve-Update-Delete) processing of multiple API resources or paths with a simple SQL data model. This approach will significantly minimise code duplication compared to an independent microservice approach. - Fast, most of the code sits together in a layer. - Minimise code duplication. - Some segregation through the different lambda microservices enabling a bit of customisation per service if needed. - Risk of complete failure if the layer breaks. - Risk of spaghetti code as most of the code is in a single layer. - Simple backends with very similar processing in each request, such as a CRUD API. Each Lambda microservice is entirely independent in this approach and should give the expected output for a given input even when other microservices are unavailable. I am still partial to using Layers for third-party libraries in this approach, but the layers should not be used for shared resources as this would create dependence. Generally, this can easily be avoided with the right architecture design. A third-party image processing library in a layer should only need to be used by an image processing microservice, for example. While they are two separate components that depend on each other, they are still a contained microservice with a defined capability as a whole. 
- Highly fault-tolerant, supporting detailed monitoring and self-healing due to the independence of microservices.
- Individual components are highly scalable in real time.
- Cost-effective, and can be beneficial for security and privacy.
- Reusable components; microservices can be used across multiple applications.
- Risk of high latency if multiple microservices and services are needed for a single request.
- Very different application design, with a learning curve to do it right.
- Complex architecture with many individual components.
- Duplicate code is not uncommon, though it can be managed pre-build.
- Complex backends for APIs and Serverless applications that do not require servers.
- Asynchronous processing pipelines.
- Security and other monitoring, automated testing and deployment.

On-demand Servers (or containers)

While a fully Serverless microservice architecture is an excellent option for many use cases, a microservice does not have a server's raw processing power. For use cases that involve data science, complex machine learning models, or need heavy processing or GPUs, microservices alone will be insufficient. The on-demand approach is typically asynchronous. A microservice can accept a given request, validate the input and report back to the requester that the request has been received and will be processed. The microservice will then launch a container, server, or even a fleet of servers to handle the request. Typically there is a means to monitor when the job has completed or failed, and another microservice that can then finalise, shut down the container or server(s) and notify the requester that the request has completed.

- Use server-based resources while still only paying for actual usage.
- Access to capabilities not supported by Lambda microservices, such as GPUs.
- Risk of potentially expensive 'zombie' resources that need to be managed.
- Latency: it can take time to start the container or server(s).
- Servers, their operating systems and software need to be maintained.
- Any processing use case that needs more resources (memory, CPU, execution time or storage) than Lambda microservices provide.
- GPU requirements such as machine learning, 3D or video rendering, etc.

Island or sectioned approach

This approach splits a large and complex backend into smaller sections or islands. Each one typically has more functionality than a single microservice, as we are not splitting based on functionality but more based on infrastructure needs. Each section or island is given the appropriate architecture design for its needs, and there is an agreed communication structure for each one to communicate as needed with the others. For example, there might be a CRUD API section that uses a shared layer approach. This can pass asynchronous jobs off to a pipeline section that uses a true microservice approach. There is also a data processing section using an on-demand approach to launch a fleet of servers on a schedule to process large amounts of data.

- Pick the best architecture for each section of a complex application.
- Not limited to specific cloud services; can use the right tool for the job.
- The benefits of each section's approach can be considered.
- Very complex architecture with multiple approaches; not for the faint of heart or inexperienced teams.
- The cons of each section's approach should be considered.
- Large complex applications and backends with different needs that cannot be solved with a single approach.

Have you tried other approaches or anything to add to the above? Connect with me on LinkedIn and let me know! https://linkedin.com/in/thomasjsmart
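To make the first approach concrete, here is a minimal sketch of a "monolith in one Lambda": a single handler that inspects the API Gateway method and path and dispatches internally. The route names and handler bodies are invented for illustration; a real deployment would sit behind API Gateway and return a JSON-encoded body.

```python
# Hypothetical sketch of the monolithic Serverless approach:
# all application code lives in one Lambda function.
def register(body):
    # e.g. process a registration form submission
    return {"status": "registered", "email": body.get("email")}

def contact(body):
    return {"status": "received"}

# Internal routing table replaces separate microservices.
ROUTES = {
    ("POST", "/register"): register,
    ("POST", "/contact"): contact,
}

def handler(event, context=None):
    """Single Lambda entry point, dispatching on method + path."""
    key = (event.get("httpMethod"), event.get("path"))
    route = ROUTES.get(key)
    if route is None:
        return {"statusCode": 404, "body": "not found"}
    return {"statusCode": 200, "body": route(event.get("body") or {})}
```

This shows both the appeal (one deployable unit, everything in one place) and the risk: as ROUTES grows, all concerns share one codebase and one failure domain, which is exactly the spaghetti trap described above.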
Is Windows Unix based?

While Windows has some Unix influences, it is not derived from or based on Unix. At some points it has contained a small amount of BSD code, but the majority of its design came from other operating systems.

Is Windows a DOS or Unix?

Microsoft's DOS became the most successful DOS of them all. DOS was never based on Unix at all, which is why Windows uses a backslash for file paths while everything else uses a forward slash. … Unlike most other operating systems, Windows NT wasn't developed as a Unix-like operating system.

Is Windows a Linux based product?

All of Microsoft's operating systems are based on the Windows NT kernel today. Windows 7, Windows 8, Windows RT, Windows Phone 8, Windows Server, and the Xbox One's operating system all use the Windows NT kernel. Unlike most other operating systems, Windows NT wasn't developed as a Unix-like operating system.

What is Windows OS based off of?

It was originally based on the NT 6.2 (Windows 8) kernel, and the latest version runs on an NT 10.0 base. This system is sometimes referred to as "Windows 10 on Xbox One" or "OneCore".

What OS is not based on Unix?

The most obvious answer is Windows: both the NT line and the older ones are not UNIX based at all. Others I can think of are much less mainstream… the classic Mac OS (before Mac OS X), the AmigaOS incarnations, BeOS. Symbian, Newton OS and the old Palm OS, if you want to look into mobile OSes as well.

Is Windows 11 Unix based?

If the next Windows 11 were based on the Linux kernel instead of Microsoft's Windows NT kernel, it would be far more shocking news than Richard Stallman giving a speech at Microsoft headquarters.

Is macOS still based on Unix?

You may have heard that Macintosh OSX is just Linux with a prettier interface. That's not actually true. But OSX is built in part on an open source Unix derivative called FreeBSD. … It was built atop UNIX, the operating system originally created over 30 years ago by researchers at AT&T's Bell Labs.
Is macOS based on Unix or Linux?

macOS is a series of proprietary graphical operating systems provided by Apple Inc. It was earlier known as Mac OS X and later OS X. It is specifically designed for Apple Mac computers. It is based on the Unix operating system.

Is Unix an operating system?

UNIX is an operating system which was first developed in the 1960s, and has been under constant development ever since. By operating system, we mean the suite of programs which make the computer work. It is a stable, multi-user, multi-tasking system for servers, desktops and laptops.

Is Unix a Linux?

Linux is not Unix, but it is a Unix-like operating system. The Linux system is derived from Unix and is a continuation of the basis of Unix design. Linux distributions are the most famous and healthiest example of direct Unix derivatives. BSD (Berkeley Software Distribution) is also an example of a Unix derivative.

What OS is macOS based on?

macOS makes use of the BSD codebase and the XNU kernel, and its core set of components is based upon Apple's open source Darwin operating system. macOS is the basis for some of Apple's other operating systems, including iPhone OS/iOS, iPadOS, watchOS, and tvOS.

Why is Unix better than Windows?

Unix is more stable and doesn't crash as often as Windows, so it requires less administration and maintenance. Unix has greater security and permissions features than Windows out of the box and is more efficient than Windows.

Is Ubuntu Unix based?

Linux is a kernel, first released on 5 October 1991 by Linus Torvalds. Nowadays, people use the name to refer to a Unix-like operating system. Ubuntu is a Linux distribution. A Linux distribution is an operating system based on the Linux kernel, the GNU tool set, various other software and software management tools.

Is Windows the only OS that is not based on UNIX or Linux?

No.
The original 16-bit Windows product line, first released in 1985, was developed from scratch at Microsoft to provide a cooperative multitasking graphical user interface on top of PC-DOS and MS-DOS. The last version of Windows released in this product line was Windows ME in 2000.

Is Windows a free operating system?

Microsoft allows anyone to download Windows 10 for free and install it without a product key. It'll keep working for the foreseeable future, with only a few small cosmetic restrictions. And you can even pay to upgrade to a licensed copy of Windows 10 after you install it.

Is there a free version of UNIX?

Yes, UNIX is free and open source and can be used personally and commercially. There are many flavors of UNIX which are made with the source code of UNIX and may have a proprietary license.
Re-written more specific question

For the details of my segmentation algorithm, see the original question below. I have managed to improve the segmentation considerably by using a different strategy to generate markers. Since I know that my objects of interest are round and quite big, instead of finding peaks of the distance to background I simply use erosion in each plane to keep the "middle" of the objects as markers. And then I apply random walker in 3D. This works pretty well, except I get a strange effect of mislabelling small parts of one object close to the other object. I can remove these small parts by simply removing small or elongated objects in each plane, but there is no way to attach them to the real object, I think. And there is no way to fix the same issue when such a small part is connected to the other big object. Here is an example: these are two z-planes from the same 3D image; on the left there was a small part labelled as yellow that I removed by size/shape, on the right you can see how the yellow object "invaded" the blue object. (The transparent red spot is the marker used for random walker.)

So the question is, how do I prevent these "invasions"? Since I am segmenting a binary image, I think the beta in the random walker doesn't affect anything - I tried raising it to 10000 and didn't see any difference. I might try doing random walker in 2D for each slice, but it gets a bit tricky since the same object can get a different label in different slices (especially if there is only one object in one slice, but two in the next) and then it's difficult to merge them in 3D.

This is the first question I am asking here, please let me know if there are any details I forgot to mention. I need to segment a set of anisotropic 3D images - confocal images of DAPI staining of zygotes.
I have been struggling with it for a long time, trying to improve the success rate, but whatever I do, I might improve segmentation of some images while segmentation of others gets worse... I know that in each image in the end I have only two objects of interest (rarely one, but in that case I don't have to analyse the image) - which are nuclei. However, often they are close together and simple thresholding creates just one object where there should be two. And sometimes there are one or two small and much brighter objects in the image (polar bodies) which I need to exclude, but they interfere with thresholding, since Otsu often considers my objects of interest background if these are present; I try to tackle them by simply clipping the brightest 1% of pixels in the image, and often it works.

My workflow currently involves first processing each plane in the z-stack separately: I do Gaussian smoothing, Otsu thresholding, and some filtering of small artefactual objects and misshapen holes (I know that the only holes I want to keep are round-ish). Here is an example of a relatively easy case where just thresholding works fine - transparent yellowish and bluish colours are overlaid on the two nuclei (and you can see a small very bright object in planes 3-21, which is luckily avoided using the described workflow).

But many images are more complicated (and I would prefer the same code to work for all images, rather than tailoring the segmentation method to each image, since that defeats the purpose of doing it automatically). I tried watershed or random walker segmentation using peaks of distance to background as markers (following http://www.scipy-lectures.org/packages/scikit-image/#marker-based-methods). This is tricky, since defining peaks can be done very differently. The distance to background can be calculated in 2D for each plane, or in 3D; the number of peaks (or peaks per label) can be different; peaks can be defined using footprints of different size (and in 2D or in 3D).
And, finally, the watershed or random-walker can be applied either to the whole image, or to each plane separately (however for randomwalker with each plane separately I think there is a bug that prevents me from using this approach https://github.com/scikit-image/scikit-image/issues/2783) Playing with these parameters showed me that they can affect the segmentation quality a lot, and it seems ideal parameters for one image might be suboptimal for another; moreover, sometimes applying watershed or randomwalker even reduces the quality of initial segmentation. So I guess my question is: is there anything else I could try that would be more robust, or is there a smart way to choose the parameters for the methods I have tried? The full code is quite long, I put the current version of it onto github: https://github.com/Phlya/ImageScripts/blob/master/zygote_segmentation_improved.py
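For reference, the erosion-based marker strategy from the rewritten question can be sketched roughly as follows. This is a minimal sketch, not the asker's actual code: the function name, the number of erosion iterations, and the choice of 3D labelling are all assumptions to be tuned per dataset.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import random_walker

def segment_nuclei(binary3d, erosion_iterations=5):
    """Sketch: erode each z-plane of a binary mask to keep only object
    "middles" as markers, then run the random walker on the 3D stack."""
    cores = np.zeros_like(binary3d)
    for z in range(binary3d.shape[0]):
        cores[z] = ndi.binary_erosion(binary3d[z],
                                      iterations=erosion_iterations)
    # Label the eroded cores in 3D so each nucleus gets one marker id.
    markers, _ = ndi.label(cores)
    # Mark background pixels as "not to be labelled" for random_walker.
    markers[binary3d == 0] = -1
    labels = random_walker(binary3d.astype(float), markers)
    labels[labels == -1] = 0  # map masked background back to 0
    return labels
```

Because the markers come from per-plane erosion of the same 3D object, the cores stack up across z and label consistently in 3D, avoiding the label-merging problem that a slice-by-slice random walker would create.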
Sander Olson was kind enough to offer me a chance to speak… er… type my peace in an interview at nanotech.biz. The link to the interview is HERE. I'm the new interviewee in a list that includes such heavy hitters (and friends/collabs) as Robert Freitas, Tihamer Toth-Fejel, Chris Phoenix, and none other than the good Dr. Hall.

From a prior version of the site…

This batch script acts similar to GQueue for MacGamess, with the exception that you can't add files after the .bat file has been implemented (run). This new version automatically goes through a directory containing PC-GAMESS input (.inp) files and runs them without having to specify each name in a long batch file. All one has to do is double-click on the rungamess.bat file from a window and it will automatically start the command prompt and program.

There is a limit to the use of the FOR command in DOS that I've not yet found a way to overcome, so the actual batch script is divided into two files. The first, rungamess.bat, contains the code that collects the .inp files and passes each name sequentially to execute.bat, the file that actually starts PC-GAMESS and runs the calculation. While it is an annoyance to have two files, once they are placed you never have to touch them again (unless you want to pass parallel calculation parameters to the executable, which is done in the execute.bat file). The image below shows an ideal setup for running these batch files to automate your calculations.

The execute.bat file calls the PC-GAMESS executable as GAMESS.EXE. If you've a different name for the program (if you keep track of versions, for instance), go into execute.bat with Wordpad or Notepad and change it accordingly. The file names (and batch files) will work universally across Windows OSes provided you stick to the standard 8.3 format. That is, 8 letters for the filename and 3 for the extension.
In Windows 2000 and higher, long file names work just fine EXCEPT spaces and non-standard characters are prohibited (you're still running the calculations in DOS mode, remember). With your directory setup, just double-click on rungamess.bat and wait for the jobs to finish. The batch files are provided below in full. You can download each file from the following links (they're just text files, so you may want to right-click and "Save As."). The internals are described below each in case changes are needed.

When downloaded as the rungamess.txt file, rename this file to rungamess.bat to make it executable.

@echo off
[lots of echo and REM text]
[download the .txt file to read]
for %%f in (*.inp) do call execute %%f
del DICTNRY
del WORK16
del MOINTS
del WORK15
del AOINTS
del IRCDATA
del AOINTS

A lot of descriptive stuff about the file, most of which can be ignored. The only thing that may need changing is the deletion of the pause line to remove the break before the calculations go; it's there so you can see what's happening the first time. Note, if you use a different file extension (instead of .inp), change the batch file accordingly where noted.

When downloaded as the execute.txt file, rename this file to execute.bat to make it executable.

echo The PC-GAMESS calculation for %1 is currently running.
@echo off
[lots of echo and REM text]
[download the .txt file to read]
GAMESS.EXE -i %1 -o %~n1.out
move PUNCH %~n1.dat
del input

Mostly self-explanatory. The first echo line just keeps track on screen (in the command prompt) of which calculation is running (where in the file list you are). The GAMESS.EXE line is the formal call for the program as per the readme files that come with it. The only file that NEEDS to be deleted is the input file. The PUNCH file comes out as just a name, while the unix version refers to it as a *.dat file. It contains useful info (depending), so I choose to save it by moving PUNCH to filename.dat. Leave in or remove accordingly. That's it!
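If a command window isn't a must, the same loop can also be sketched in Python (a hypothetical port, not part of the original download; GAMESS.EXE and the scratch-file names are taken from the batch files above, and the injectable runner is my own addition so the loop can be exercised without GAMESS installed):

```python
import glob
import os
import subprocess

# Scratch files cleaned up at the end, mirroring the del lines in rungamess.bat.
SCRATCH_FILES = ["DICTNRY", "WORK16", "MOINTS", "WORK15", "AOINTS", "IRCDATA"]

def run_all(pattern="*.inp", exe="GAMESS.EXE", runner=None):
    """Run every matching input file through PC-GAMESS, like rungamess.bat."""
    if runner is None:
        runner = lambda cmd: subprocess.run(cmd, check=True)
    for inp in sorted(glob.glob(pattern)):
        base = os.path.splitext(inp)[0]
        print(f"The PC-GAMESS calculation for {inp} is currently running.")
        runner([exe, "-i", inp, "-o", base + ".out"])
        if os.path.exists("PUNCH"):
            os.replace("PUNCH", base + ".dat")  # keep the punch file, as in execute.bat
    for scratch in SCRATCH_FILES:
        if os.path.exists(scratch):
            os.remove(scratch)
```

One file instead of two, and no 8.3 filename restrictions, at the cost of needing a Python installation on the Windows box.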
Any questions, drop a line.

The OpenSuse 10.0 CD set, still the only distro to install cleanly on my Precision M70, DOESN'T contain lots of the useful developmental software and libraries some of ye olde quantum chemical codes need to run. This is painfully apparent to those who have downloaded the pre-compiled GAMESS-US code and found it doesn't run (and what's with the OSX in the README?) because of missing libg2c files. Yes, even real tech geeks are too lazy to make some days. That may have come out wrong.

If your goal is just to get the program running, the quick fix is to feed your machine just the libraries (a proper compilation or, uh, more complete Linux installation would also work, but that's beyond the scope of this blog). This involves the following:

1. Go to rpm.pbone.net or your favorite rpm search engine.
2. Search for "libg2c".
3. Download the libg2c-3.3.6-3.i586.rpm (or machine-specific) rpm. Version 3.3.X because the precompiled GAMESS-US code was built with g77 3.3, and why risk any more hassle. I've one local i586 copy on this site for when I'm in the middle of nowhere and don't want to bother with problematic rpm repositories.
4. On the problematic Linux box, log in as root (su). With the libg2c file in the current directory, install the rpm:

su
rpm -i libg2c-3.3.6-3.i586.rpm

(or whatever flavor you have).
5. Wait patiently.
6. Upon completion, GAMESS-US should work just fine in single-processor mode (that is to say, if you've got a dual core or dual chip, you'll still have some MPI issues to resolve with Suse).
import { ActionType, createAction } from 'typesafe-actions'; import { GraphElements } from '../types/Graph'; import { DurationInSeconds, TimeInSeconds } from '../types/Common'; enum GraphDataActionKeys { GET_GRAPH_DATA_START = 'GET_GRAPH_DATA_START', GET_GRAPH_DATA_SUCCESS = 'GET_GRAPH_DATA_SUCCESS', GET_GRAPH_DATA_FAILURE = 'GET_GRAPH_DATA_FAILURE', HANDLE_LEGEND = 'HANDLE_LEGEND' } // synchronous action creators export const GraphDataActions = { getGraphDataStart: createAction(GraphDataActionKeys.GET_GRAPH_DATA_START), getGraphDataSuccess: createAction( GraphDataActionKeys.GET_GRAPH_DATA_SUCCESS, resolve => (timestamp: TimeInSeconds, graphDuration: DurationInSeconds, graphData: GraphElements) => resolve({ timestamp: timestamp, graphDuration: graphDuration, graphData: graphData }) ), getGraphDataFailure: createAction(GraphDataActionKeys.GET_GRAPH_DATA_FAILURE, resolve => (error: any) => resolve({ error: error }) ), handleLegend: createAction(GraphDataActionKeys.HANDLE_LEGEND) }; export type GraphDataAction = ActionType<typeof GraphDataActions>;
Problem with NAT (reflection?) after upgrade from 2.0.1 to 2.1.5

We have an external network A (NET-A) and an internal network 192.168.9.x. We use port forwarding and pools to redirect traffic from an external IP to an internal IP/cluster. Then we have "Manual Outbound NAT rule generation" on, with rules which we find in /tmp/rules.debug:

nat on $WAN from 192.168.9.13/32 to any -> (NET-A).122/32 port 1024:65535
nat on $WAN from 192.168.9.128/28 to any -> (NET-A).115/32 port 1024:65535
nat on $WAN from 192.168.9.160/28 to any -> (NET-A).114/32 port 1024:65535

These rules work. Then we have rules for internal communication which worked in 2.0.1:

nat on $LAN from 192.168.9.128/28 to 192.168.9.0/24 -> (NET-A).115/32 port 1024:65535
nat on $LAN from 192.168.9.160/28 to 192.168.9.0/24 -> (NET-A).114/32 port 1024:65535

When I try to connect from 192.168.9.160 to 192.168.9.128/28 via (NET-A).115 I see the packet in tcpdump:

13:02:33.154806 IP 192.168.9.160.34945 > 184.108.40.206.80: Flags S, seq 2887008773, win 14600, options [mss 1460,sackOK,TS val 253039556 ecr 0,nop,wscale 7], length 0

pfctl -s state | grep 192.168.9.160
em2 tcp 192.168.9.132:80 <- (NET-A).115:80 <- 192.168.9.160:34947 CLOSED:SYN_SENT
em2 tcp 192.168.9.160:34947 -> (NET-A).114:45381 -> 192.168.9.132:80 SYN_SENT:CLOSED

But no packet leaves the firewall from (NET-A).114 to 192.168.9.132?

Again: this all worked perfectly with 2.0.1 for years. I found the "NAT Reflection mode for port forwards" setting in the advanced settings, but enabling/disabling it doesn't change anything. How can I solve the problem? I found others having problems with these reflection rules, but no real help :(

I replaced the first 3 octets with (NET-A).

I installed the haproxy package and changed all LB jobs to haproxy. Now everything is working fine again!
package day14;

import java.util.*;

public class NanoFactory {
    public final Map<String, Reaction> ingredientReactionMap;
    protected List<Reaction> reactionsOre;
    protected List<Reaction> reactionsNonOre;
    protected Map<String, Ingredient> availableIngredients;
    protected List<Reaction> reactionHistory;
    protected final boolean verbose = true;

    public class InvalidReactionException extends Exception {}

    public class MissingIngredient extends InvalidReactionException {
        public final String name;

        public MissingIngredient(String name) {
            this.name = name;
        }

        @Override
        public String toString() {
            return "MissingIngredient{" + "name='" + name + '\'' + '}';
        }
    }

    public class NotEnoughIngredient extends InvalidReactionException {
        public final String name;
        public final int quantityRequired;
        public final int quantityAvailable;

        public NotEnoughIngredient(String name, int quantityRequired, int quantityAvailable) {
            this.name = name;
            this.quantityRequired = quantityRequired;
            this.quantityAvailable = quantityAvailable;
        }

        @Override
        public String toString() {
            return "NotEnoughIngredient{" + "name='" + name + '\'' +
                    ", quantityRequired=" + quantityRequired +
                    ", quantityAvailable=" + quantityAvailable +
                    '}';
        }
    }

    public NanoFactory(List<Reaction> reactionList) throws Exception {
        this(buildIngredientReactionMap(reactionList));
    }

    public NanoFactory(Map<String, Reaction> ingredientReactionMap) {
        this.ingredientReactionMap = ingredientReactionMap;
        setupReactions();
        availableIngredients = new HashMap<>();
        reactionHistory = new ArrayList<>();
    }

    public static Map<String, Reaction> buildIngredientReactionMap(List<Reaction> reactionList) throws Exception {
        Map<String, Reaction> ingredientReactionMap = new HashMap<>();
        for (Reaction reaction : reactionList) {
            if (ingredientReactionMap.containsKey(reaction.output.name)) {
                throw new Exception("ingredientReactionMap already contains output key!");
            }
            ingredientReactionMap.put(reaction.output.name, reaction);
        }
        return ingredientReactionMap;
    }

    protected void setupReactions() {
        reactionsOre = new ArrayList<>();
        reactionsNonOre = new ArrayList<>();
        for (Reaction reaction : ingredientReactionMap.values()) {
            if (reaction.isOreReaction()) {
                reactionsOre.add(reaction);
            } else {
                reactionsNonOre.add(reaction);
            }
        }
    }

    public void setReactionHistory(List<Reaction> reactionHistory) {
        this.reactionHistory = reactionHistory;
    }

    public int getReactionHistorySize() {
        return reactionHistory.size();
    }

    public Map<String, Ingredient> getAvailableIngredients() {
        return availableIngredients;
    }

    public static List<Reaction> copyReactionList(List<Reaction> reactionList) {
        List<Reaction> result = new ArrayList<>();
        for (Reaction reaction : reactionList) {
            result.add(reaction);
        }
        return result;
    }

    public NanoFactory cloneFactory() {
        NanoFactory factory = new NanoFactory(ingredientReactionMap);
        for (Ingredient ingredient : availableIngredients.values()) {
            factory.addIngredient(ingredient.clone());
        }
        factory.setReactionHistory(copyReactionList(reactionHistory));
        return factory;
    }

    public void addIngredient(Ingredient ingredient) {
        if (! availableIngredients.containsKey(ingredient.name)) {
            availableIngredients.put(ingredient.name, ingredient);
            return;
        }
        int quantity = availableIngredients.get(ingredient.name).quantity + ingredient.quantity;
        availableIngredients.put(ingredient.name, new Ingredient(quantity, ingredient.name));
    }

    public void removeIngredient(Ingredient ingredient) throws InvalidReactionException {
        int quantityAvailable, quantityRemaining;
        if (! availableIngredients.containsKey(ingredient.name)) {
            throw new MissingIngredient(ingredient.name);
        }
        quantityAvailable = availableIngredients.get(ingredient.name).quantity;
        quantityRemaining = quantityAvailable - ingredient.quantity;
        if (quantityRemaining < 0) {
            throw new NotEnoughIngredient(ingredient.name, ingredient.quantity,
                    availableIngredients.get(ingredient.name).quantity);
        }
        if (quantityRemaining == 0) {
            availableIngredients.remove(ingredient.name);
            return;
        }
        availableIngredients.put(ingredient.name, new Ingredient(quantityRemaining, ingredient.name));
    }

    /**
     * For going from 3 A, 4 B => 1 AB
     * @param reaction
     * @throws InvalidReactionException
     */
    public void executeReaction(Reaction reaction) throws InvalidReactionException {
        for (Ingredient ingredient : reaction.inputs) {
            // Don't allow ORE reactions:
            // removeIngredient(ingredient);
            // Allow ORE reactions:
            try {
                removeIngredient(ingredient);
            } catch (InvalidReactionException e) {
                if (! reaction.isOreReaction()) {
                    throw e;
                }
            }
        }
        addIngredient(reaction.output);
        reactionHistory.add(reaction);
    }

    /**
     * For going from 1 AB => 3 A, 4 B
     * @param reaction
     * @throws InvalidReactionException
     */
    public void executeReactionReverse(Reaction reaction) throws InvalidReactionException {
        // Don't allow ORE reactions:
        removeIngredient(reaction.output);
        // Allow ORE reactions:
        // try {
        //     removeIngredient(reaction.output);
        // } catch (NotEnoughIngredient exception) {
        //     // If it's *just* ore, this is OK
        //     if (reaction.isOreReaction()) {
        //         availableIngredients.remove(reaction.output.name);
        //     } else {
        //         throw exception;
        //     }
        // }
        for (Ingredient ingredient : reaction.inputs) {
            addIngredient(ingredient);
        }
        reactionHistory.add(reaction);
    }

    /**
     * BFS/DFS, iterative using stack
     * @return
     */
    public Integer numOreNeededForOneFuel() throws Exception {
        printIngredientReactionMap();
        Integer minOreNeeded = null;
        Stack<NanoFactory> stack = new Stack<>();
        // Queue<NanoFactory> queue = new LinkedList<>();
        addIngredient(new Ingredient(1, Ingredient.INGREDIENT_FUEL));
        stack.push(this);
        // queue.add(this);
        NanoFactory factory, clone;
        int hash;
        Set<Integer> factoryStatesSeen = new HashSet<>();
        int maxDepth = 0;
        while (! stack.isEmpty()) {
        // while (! queue.isEmpty()) {
            factory = stack.pop();
            // factory = queue.poll();
            hash = factory.hashCode();
            if (factoryStatesSeen.contains(hash)) {
                continue;
            }
            factoryStatesSeen.add(hash);
            // logVerbose(stack.size());
            logVerbose(factory.getReactionSizeBuffer() + factory);
            if (factory.getReactionHistorySize() > maxDepth) {
                maxDepth = factory.getReactionHistorySize();
                logInfo("maxDepth=" + maxDepth + " stack.size()=" + stack.size());
            }
            if (factory.hasOnlyOre()) {
                // logVerbose("HAS ONLY ORE!!!");
                if (minOreNeeded == null || factory.getNumOre() < minOreNeeded) {
                    minOreNeeded = factory.getNumOre();
                }
            }
            boolean didPerformReaction = false;
            // TODO: We want to go in order from "farthest from ore" to "closest to ore"
            // for (String ingredientName : factory.getAvailableIngredients().keySet()) {
            for (Reaction reaction : reactionsNonOre) {
                // if (ingredientName.equals(Ingredient.INGREDIENT_ORE)) {
                //     // Not cashing ore in for anything
                //     continue;
                // }
                // reaction = ingredientReactionMap.get(ingredientName);
                // if (! availableIngredients.containsKey(reaction.output.name)) {
                //     continue;
                // }
                logVerbose(factory.getReactionSizeBuffer() + "?" + reaction);
                try {
                    factory.executeReactionReverse(reaction);
                } catch (InvalidReactionException e) {
                    logVerbose(factory.getReactionSizeBuffer() + "->" + e);
                    // if (reaction.isOreReaction()) {
                    //     logVerbose("Caught invalid ore reaction, reversing ...");
                    //     factory.executeReaction(reaction);
                    //     logVerbose(factory.getReactionSizeBuffer() + factory);
                    //     // break;
                    // }
                    continue;
                }
                logVerbose(factory.getReactionSizeBuffer(-1) + "->Y");
                didPerformReaction = true;
                break;
                // clone = factory.cloneFactory();
                // logVerbose(clone.getReactionSizeBuffer() + "?" + reaction);
                // try {
                //     clone.executeReactionReverse(reaction);
                // } catch (InvalidReactionException e) {
                //     logVerbose(clone.getReactionSizeBuffer() + "->" + e);
                //     continue;
                // }
                // logVerbose(clone.getReactionSizeBuffer(-1) + "->Y");
                // stack.push(clone);
                // // queue.add(clone);
            }
            if (didPerformReaction) {
                stack.push(factory);
            } else {
                for (Reaction reaction : reactionsOre) {
                    logVerbose("ORE reaction");
                    clone = factory.cloneFactory();
                    clone.executeReaction(reaction);
                    stack.push(clone);
                }
            }
            // if (didPerformReaction) {
            //     stack.push(factory);
            // }
        }
        return minOreNeeded;
    }

    public boolean hasOnlyOre() {
        return availableIngredients.size() == 1
                && availableIngredients.containsKey(Ingredient.INGREDIENT_ORE);
    }

    public int getNumOre() {
        return availableIngredients.containsKey(Ingredient.INGREDIENT_ORE)
                ? availableIngredients.get(Ingredient.INGREDIENT_ORE).quantity
                : 0;
    }

    @Override
    public String toString() {
        return "NanoFactory{" +
                "availableIngredients=" + getAvailableIngredientsString() +
                ", reactionHistory.size()=" + getReactionHistorySize() +
                '}';
    }

    protected String getAvailableIngredientsString() {
        String result = "{";
        boolean leadingComma = false;
        for (String name : availableIngredients.keySet()) {
            if (leadingComma) {
                result += ", ";
            }
            result += name + "=" + availableIngredients.get(name).quantity;
            leadingComma = true;
        }
        return result + "}";
    }

    public String getReactionSizeBuffer() {
        return getReactionSizeBuffer(0);
    }

    public String getReactionSizeBuffer(int plusMinus) {
        String buffer = "";
        for (int i = 0; i < getReactionHistorySize() + plusMinus; i++) {
            buffer += " ";
        }
        return buffer;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        NanoFactory that = (NanoFactory) o;
        return availableIngredients.equals(that.availableIngredients);
    }

    @Override
    public int hashCode() {
        // TODO: Look into why this doesn't work for input-test-2.txt
        // return Objects.hash(availableIngredients);
        return getAvailableIngredientsString().hashCode();
    }

    public void printIngredientReactionMap() {
        logVerbose("ingredientReactionMap:");
        for (String ingredientName : ingredientReactionMap.keySet()) {
            logVerbose("\t" + ingredientName + ": " + ingredientReactionMap.get(ingredientName));
        }
    }

    protected void logInfo(String message) {
        System.out.println(message);
    }

    protected void logVerbose(String message) {
        if (! verbose) {
            return;
        }
        System.out.println(message);
    }
}
What is design-stage localization?

Design-stage localization (DSL) is a way to arrange your workflow so that localization and the design stage of product development run in parallel. It is similar to continuous localization, but with one crucial difference: traditional continuous localization starts the translation process after the design stage is complete, while DSL begins at the design stage.

Why design-stage localization?

Because it brings significant efficiency improvements, especially for agile teams and companies that rely on Continuous Integration and Continuous Deployment (CI/CD). Let’s see why.

Linguists and designers can see localized prototypes directly on the designs and give real-time feedback. They don’t need to wait for the product to be developed to check the output. On top of that, feedback from translators and designers becomes actionable almost immediately, and updates are handled quickly.

Reduced time to market

As we’ve already discussed, DSL allows localization to be carried out in parallel with the design and development stages of product/software development. It complements the nature of agile software development, which means localization becomes part of product development and not an afterthought. Developers and other stakeholders do not necessarily need to wait for translators before they launch, as opposed to the traditional localization workflows you can see in the image below.

Prevent L10N issues earlier

Design-stage localization allows UI engineers or developers to optimize and account for UI layout issues caused by text expansion and contraction before development starts. Traditionally, these issues would have been identified and fixed post-release, which slows down your time to market and ultimately costs you money. Design-stage localization also allows designers, developers, marketing teams, linguists, and other involved parties to communicate and collaborate seamlessly without dealing with pesky emails.
And it all happens within the design stage, and even better, within the design tool itself! All of this is done before the product is even built. By providing visual context, DSL helps translators produce quality translations, which improves the overall user experience. Developers can work efficiently and without hassle by using screenshots to identify where each key name is placed. Now that we have seen why we might implement DSL in our workflow, let’s see how it works.

Design localization workflow

This type of workflow is flexible and can be tailored to your technology stack and existing workflows.

- Designers start working on the product’s UI in the source language and assign each string a relevant key name. Once finalized and approved, the keys and strings are pushed to a translation management system (TMS) or a platform where translators can start the localization process before the product goes into development.
- Linguists translate, proofread, sign off, and publish the localized strings. Designers pull the strings into the design tool, review the localized prototypes, and optimize them further.
- Developers get the translation keys from screenshots generated by the design tool and sent by the designers, and develop the product with reference to the localized prototypes. The product is then tested by the QA team and finally released to multiple markets.

As mentioned above, this workflow can be adjusted.

- Designers could start designing and testing their UIs with machine translations, set character limits, and send out the keys with screenshots to the linguists and developers.
- Developers and linguists could start simultaneously. The developers could use key names in the designs instead of the actual language values. Once the translations are ready, the developers could pull the strings, test, and prepare for release.

Integrating this kind of workflow requires certain tools to really provide value in product development.
These tools include:

- Translation tools—a translation management system with an API.
- Design tools—a Figma, Sketch, or Adobe XD plugin that can connect to your TMS.

Design-stage localization demo

Now let’s see design-stage localization in action with a simple demo. You can access the user interface designs from our Figma page. The designs were created by modifying “Mental Health / Mindfulness Mobile App (Neubrutalism) UI” © Bilal Limi (licensed under CC BY 4.0).

To successfully perform this demo, you’ll need these tools:

- A Locize account (we’ll use it as our TMS platform).

After copying the designs from our page, navigate to the Resources tab highlighted in blue and search for the Trans Localize plugin. After that, click Run. Finally, our plugin will be displayed in your Figma project. The functionalities of this plugin are straightforward:

- Pull—to pull translated strings from Locize.
- Push—to push translatable strings to Locize.
- Display Keys—to display translation keys on the UI designs.

As mentioned above, we will use Locize as our TMS, so let’s set it up. Create an account and create a new project named l10n-app. The plugin requires some information before pushing or pulling translation strings, all of which can be obtained under the project settings found at the top-right corner. The required information, namely the project ID, API key, and namespace, is listed at the bottom of that page.

Displaying translation keys

Navigate back to Figma, open our plugin, and click on Display Keys to view the translation keys directly on the UI. Despite its apparent simplicity, this is a powerful feature. It helps engineers or developers know where each translation key needs to be placed; they don’t need to waste time thinking up and generating keys, which improves their efficiency. The screenshot below exemplifies its usefulness.
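As a rough illustration of how such keys could be generated automatically, here is a hypothetical helper (not a feature of the actual plugin; the naming scheme is an assumption):

```javascript
// Hypothetical sketch: derive a stable translation key from a screen name
// and the source text of a design layer, in dotted camelCase form.
function makeKey(screenName, sourceText) {
  // Lowercase, then camelCase across any non-alphanumeric separators.
  const camel = (s) =>
    s
      .toLowerCase()
      .replace(/[^a-z0-9]+(.)/g, (_, c) => c.toUpperCase())
      .replace(/[^a-zA-Z0-9]/g, "");
  return `${camel(screenName)}.${camel(sourceText)}`;
}

console.log(makeKey("Home Screen", "Start meditating")); // "homeScreen.startMeditating"
```

A scheme like this keeps keys predictable for developers, so the key shown on the design screenshot maps directly to the key they reference in code.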
Pushing translation strings

To push strings to the Locize TMS, fill in the required information and select the version you want (it appears automatically after you fill in the project ID). Next, click the Save button. You can now click Push to push the strings to the Locize platform. You should push the strings from your source/reference language; in our case, it’s English (en-US). You should be able to see the number of words and keys just under the namespace.

Before we can translate strings, we need to choose the target language. Click the button indicated on the screenshot and add a new language; in our case, we choose Spanish (es). Next, switch to the target language by clicking the EN-US dropdown button and selecting es. Finally, go to the namespaces, click the button shown on the screenshot, and start translating. Translate all your strings and click the Save button at the top-right corner.

After you have completed your translations, you’ll need to publish them. Navigate to your dashboard, click on Show me the publish/CDN details, and click on the indicated button. Your strings will now be published!

Pulling translation strings

In Figma, run our plugin, input the required information, select the target language, click on Pull, and watch the magic happen! The UI strings will now be in the target language, and you can start pointing out design issues related to it. From the image above, you can clearly spot text expansion. If the app had been localized into Spanish only after the first release, or just after development, without coding the UI to accommodate text expansion, the layout would have broken, costing you time and money.

This plugin is intended for demonstration purposes only and has limited features; however, it will be extended with new features in the future.
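To make the push step concrete, here is a minimal sketch (assumed data shapes only, not the plugin’s real internals) of flattening the strings collected from a design file into the flat key/value JSON object that a namespace push to a TMS typically expects:

```javascript
// Sketch: turn collected design strings into a flat key/value payload.
// Input shape ({ key, text }) is an assumption for illustration.
function toPushPayload(entries) {
  const payload = {};
  for (const { key, text } of entries) {
    if (key in payload) {
      // Duplicate keys would silently overwrite each other in the TMS,
      // so fail loudly instead.
      throw new Error(`Duplicate translation key: ${key}`);
    }
    payload[key] = text;
  }
  return payload;
}

const payload = toPushPayload([
  { key: "home.title", text: "Take a breath" },
  { key: "home.cta", text: "Start meditating" },
]);
// payload is ready to be sent as the JSON body of the push request.
```

The real plugin handles the HTTP call to Locize itself; the point here is only the shape of the data that crosses the wire.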
Hi Folks - Somewhat new to Airtable but have done a deep dive and feel pretty good about how things work. I am developing a grant application processing base that is pretty complicated. One of the necessary evils is long primary fields, like 25-50 chars. We may be able to trim them in the future, but because the primary field is so important in Airtable (i.e., card headers, text that displays for linked field values, etc.), it has to be real and not a numerical code. My main primary field is like this: 2020-059 - ABC - Smith Tract - Big River State Park. It has been creating all types of issues on import: commas were an issue until I killed them (easy fix), and then the creation of “quotation marks” around primary link fields in other tables as I turn primary key links on and off while working on getting the base made. ANYWAY - I had a corruption issue occur where additional spaces got inserted in the primary field, evidently where the key got wrapped in a given view, and then it stuck. See below if you are curious about that or have ideas on how to avoid it. Advice on characters to avoid? Commas, obviously. Dashes vs " | " marks? Underscores would be way less readable, but I may have to try them… I need to make sure this is somewhat bombproof when I roll out to 9 staff and we all put our annual work cycle on top of it without alternative methodology in place… Should I auto-populate the primary field with the three components? How would this work with importing/linking additional tables (i.e., a many-to-one budget line table that is initially a large import)? PS - Airtable has no bug or error report category in contacting help. I submitted it under Other Questions or something. Am I missing something, or do they really not have an official way to report an issue? I guess most reports are operator error… but hmmmm.
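On the auto-populate question, one hedged sketch of what that could look like, e.g. in an Airtable scripting/automation step (the field components, separator, and function name are all made up for illustration, not Airtable API calls):

```javascript
// Hypothetical sketch: compose the primary field from its three components,
// stripping the characters that caused import trouble (commas, quotes)
// and collapsing stray whitespace.
function buildPrimaryKey(grantId, org, tract, park) {
  const clean = (s) => s.replace(/["',]/g, "").replace(/\s+/g, " ").trim();
  return [grantId, org, tract, park].map(clean).join(" - ");
}

console.log(buildPrimaryKey("2020-059", "ABC", "Smith Tract", "Big River State Park"));
// "2020-059 - ABC - Smith Tract - Big River State Park"
```

Generating the key from component fields this way means every record is built by the same rule, so a stray comma or quote typed by one of the 9 staff can’t quietly break linking later.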
Hello - I’ve had an issue where a longish primary field has been corrupted at a point where something has apparently inserted a carriage return that comes out as three blank spaces in the data. This is where the primary field got wrapped in a certain view? See screenshots: the first one shows the data with the carriage return (that I did not insert, I don’t even know how, haha), showing that the data won’t return to one line even when there is room; the second screenshot shows the three spaces that I didn’t put there. This is causing a major issue with data importing and auto field linking. Some of the issue spots I am now seeing have only one extra space. I’ll try to fix it with a cut and paste out to Excel or a find-and-replace script run. But I have to use long primary fields, so any help on how to avoid this is appreciated. Thanks, Damon Hearne
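As a stopgap while the root cause is unknown, a normalization pass like the following sketch could be run over the key values before importing or linking (plain JavaScript, e.g. in a scripting block; the function name is made up):

```javascript
// Sketch: normalize a primary-field value before matching/linking.
// Converts stray carriage returns/newlines to spaces, then collapses
// any run of multiple spaces into a single one.
function normalizeKey(value) {
  return value
    .replace(/[\r\n]+/g, " ")  // line breaks -> single space
    .replace(/ {2,}/g, " ")    // "   " -> " "
    .trim();
}

console.log(normalizeKey("2020-059 - ABC -\n  Smith Tract"));
// "2020-059 - ABC - Smith Tract"
```

Running both the imported values and the existing primary fields through the same normalizer makes the link matching insensitive to this kind of invisible corruption.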
Instructor: 0:00 We have some real-time synchronization now, but honestly, it's quite a bad implementation so far. The best way to show that is to simply demonstrate the problem with our current implementation. We do that by simulating some latency. 0:17 I'm going to give both a latency of two seconds. Now, something happens. If I add tea to one of the applications, and coffee to the other one, and add them both almost simultaneously, they will arrive in a different order. 0:38 That's not the biggest problem, because the order doesn't really matter for this application. It gets worse once we make reservations. If I reserve the coffee in one application, the other application reserves the tea. The opposite is also true. 0:56 In one screen, the tea is reserved by Panda and in the other one, it's reserved by whatever that animal is. What is the root of this issue? We can find the root of the problem by looking at our server logs. Once you make a reservation and we look at a patch, we see that that patch is index-based. 1:17 It's going to reserve whatever item is at position four, for whatever user is represented by the number seven. For that reason, we are going to restructure our state and storage to be keyed by ID. Let's first change the static default data we normally have. 1:40 Next, we change our reducer-related bits so that, instead of storing everything in an array, it stores everything in an object. This actually simplifies a bunch of things. We don't have to search for the gifts in there anymore. We can just immediately pick them from that collection. 2:00 Finally, we need to make a small change in the UI. Currently, our component is mapping over the array. That doesn't work anymore. We need to map over the values of the object collection. Now, we can save everything and restart our server to make sure it forgets about all the previous patches. 2:18 The interesting thing now is what happens if we try to introduce the same problem.
We still see that the order isn't consistent, but if you make a reservation, at least it modifies the correct item. 2:34 We can see how this affects the patches: the patches are now expressed in terms of the ID of the object rather than the index at which it lives in the state. It's generally key to make sure that you use IDs as much as possible. 2:52 You might be wondering, "Can we also fix the ordering of this application?" The answer is yes, but it's beyond the scope of this tutorial. A few ways in which you could fix this are, for example, storing a sort index, or storing a separate collection in which the order is kept.
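The refactor described in this lesson can be sketched in a few lines (a standalone sketch: the ids, item names, and the patch shape are made up for illustration, loosely following the JSON-patch-style patches produced by libraries such as Immer):

```javascript
// Sketch: store items in an object keyed by id instead of an array,
// so patches address items by stable id rather than by array index.
function byId(items) {
  const map = {};
  for (const item of items) map[item.id] = item;
  return map;
}

const gifts = byId([
  { id: "t1", description: "tea", reservedBy: undefined },
  { id: "c1", description: "coffee", reservedBy: undefined },
]);

// A reservation patch now targets a stable id, not a position. Applying
// it on any client modifies the same logical item, regardless of order.
const patch = { op: "replace", path: ["t1", "reservedBy"], value: "Panda" };
gifts[patch.path[0]][patch.path[1]] = patch.value;

// The UI renders Object.values(gifts) instead of mapping over an array.
```

With index-based paths, the same patch applied after a reordering would have hit a different item; with id-based paths it cannot.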
To keep this from being voted as a duplicate of this, let me clarify. The other question asks for DM tactics to battle choke-points, and its answers are all focused on fighting at these choke-points. I ask for ways to keep PCs from creating/using choke-points in the first place. I want my PCs to jump into the boss room which I've designed, and explore it during the fight. I want them to enter rooms and not stand in the corridors during the entire dungeon. Many classic dungeon maps are modular corridors and rooms, which PCs are expected to progress through gradually. Examples include the Doomvault, Tomb of Horrors, or the Forge of Fury. Advanced DMs can bring a stronger sense of an actual lair filled with enemies when they start mixing enemies from multiple rooms, but mostly the books describe each room independently, and as far as I've experienced, unless something makes a particularly strong noise, fights are usually contained to the enemies in each separate room. Now, as both a player and as a DM, I've experienced the choke-point strategy. Boss rooms are a prime example, but this works in many situations. Players face the next room, riddled with enemies. The DM reads their campaign book: "Yeah, you have 4 undead zombies, 2 living suits of armor, and a weakened Lich looking at you once you open the door. Roll initiative!" Ok, so PCs at the door, a long corridor behind them, and enemies within the room. The frontliners now simply stand in front of the doorway (effectively blocking it) and the nukers blast from behind and drop massive AoEs on the room while enemies do their best to:

- break the front line
- teleport / run away through some hidden exit
- spam the party with AoE in return
- attack the backliners with ranged attacks, suffering from cover issues (thanks to the frontliners)

(This literally happened in my last session.
The Bard dropped his Storm Sphere in the room, the Druid dropped a Moonbeam, and while the Lich teleported out and tried his best to mess the party up, all the minions inside the room died while only slightly bothering our Cleric.) With specific enemies, breaking the party up is somewhat easy: if you have ongoing AoEs, if you can teleport, if you can protect yourself from the party's ongoing spells. But most often, your group of enemies is just a bunch of martial foes with no way of splitting up the party. As you can see from the maps, rooms are usually very different. They have platforms, and pillars, and parts with difficult terrain, and all of this should be something for the party to take advantage of. But it's so much easier to just use the doorway as a choke-point that rooms are basically only used after everything is dead, when searching for loot. How can I incentivize my players to enter rooms and take strategic advantage of each room's layout, when using the door as a choke-point is such an easy and effective strategy in many published adventures? Specifically, I want players to want to enter rooms and fight there, not just stand at the doors. When I design dungeons/enemies from scratch, there are some ways of handling this. However, when using official content, it gets harder to adapt the environment or enemies and still keep true to the written content. We don't play in AL, but for example, we're doing Tales of the Yawning Portal, so strategies that also work in such published and mapped-out dungeons are preferable.
In this article, we'd like to share an interview with Simon Lee, active contributor at Stileex.xyz and a managing partner at Atout Persona, one of the leading IT service providers and a pioneer in software innovation in Madagascar. Simon shares the reasons that made Atout Persona move to Jelastic PaaS powered by Layershift and unveils how applying automatic vertical scaling helped reduce their cloud hosting costs. Quick note about Atout Persona: The company works mainly on the development, integration, and maintenance of informational systems, and is favored by more than 600 customers in Madagascar who benefit from their software expertise, including major accounts and institutions: Air Madagascar, Madagasikara Airways, Madagascar G4S, Total, Jovenna, Ministries, etc. Currently, they are expanding their activities into the rest of Africa, including Ethiopia, Congo, and Mali. What made you look into PaaS solutions such as Jelastic? In Madagascar, companies are used to hosting their applications on servers (or simple PCs) on-premise; they are not yet accustomed to the cloud. But our customers' machines were not placed in appropriate environments and were not sufficiently secured either (no air conditioning, no electrical safety, no physical access control, etc.). To help customers avoid these unsuitable infrastructure issues, we decided to offer them Jelastic PaaS. Have you tried other hosting solutions? Why did you stop using them? We used a lot of other solutions before Jelastic! The main ones we selected were AWS, Linode, DigitalOcean, OVH, etc. The service we preferred in terms of functionality was AWS. But that solution is so complex to deploy and maintain! In addition, it turned out to be much more expensive to use than was announced by the supplier. Linode, DigitalOcean, and OVH are interesting solutions, but none offer Jelastic's features and none offer Layershift's level of customer service.
What were the key reasons to host your projects on Jelastic? The reasons that led us to work with Jelastic are simple. We needed an easy, reliable, and cost-effective solution for hosting our customers' informational systems. A dedicated server per client is an inconceivable option in terms of both cost and maintenance operations. A simple VPS did not allow us to set up an n-tier infrastructure, and a set of VPSs for a single customer was also very difficult to maintain. So we chose Jelastic for: - its simplicity of use - the possibility of automatic and controlled scaling - the advantageous economic model - containers ready for use We must also admit that we chose Jelastic because we had already worked with Layershift for several years and they allowed us to test this new (for us) technology. The excellent customer support of Layershift was a strong argument in the adoption of Jelastic. What stacks do you use to run your projects and how are they interconnected? Each of the instances that we have is linked to a client. Generally, we use the classic combination: - a reverse proxy - an application server We also create ephemeral instances on Jelastic for our tests and development. What programming languages do you use? Python and PHP are heavily used on the server side. As for Java, we use it for mobile applications, and for some software that we install on the client side (especially to allow offline use of their information system). What is the cost difference? Do you find any benefits from using automatic vertical scaling? We have reduced our hosting costs by almost 30%. Vertical scaling was one of our main motivations for working with Jelastic. It is undeniable that this system, which has remained simple, allows us today to save money while coping with peaks in one of the resources (such as during data integration and migration phases). Your general impression and highlights of using the platform.
Jelastic has become the crucial solution of choice for us. When a customer wants to host his informational system, we systematically offer him this platform. The management interface deserves a mention too: simple, intuitive, and functional. It may not have been Jelastic's first target market, but this platform definitely has a big role to play in Africa. ****** Imagine that you no longer need to worry about server or cloud configuration, focusing only on the development of your software and getting more users. Sounds like a dream? Try Jelastic and it will cover routine configuration tasks and save on your hosting costs with its pay-as-you-use approach.
Dear aspiring undergraduate researcher, With an increase in interested undergraduate students seeking a research position in the Jin Lab, it has become necessary to formalize a protocol to ensure new students are familiar with the research being conducted and recognize what is expected of them before they begin experiments. The Jin Lab welcomes highly motivated students of any background who are genuinely interested in the research. Working in this lab will grant meaningful experience and exposure to relevant, real-world topics. Additionally, independently designed experiments will require students to exercise critical thinking in tasks such as incorporating proper controls or investigating possible experimental errors. For an easier time adjusting to the lab setting, it is suggested that you be familiar with subjects such as microbiology or biochemistry, as many of the concepts and procedures you will encounter in the Jin Lab build upon these foundations. Suggested classes include molecular and cellular biology, genetics, biochemistry, metabolic engineering, and related lab courses. To briefly summarize your undergraduate research experience at the Jin Lab: - You will be helping the graduate students or postdocs with their projects to achieve the overall goal of producing value-added products from renewable biomass by means of microbiological engineering. - We will assign you to a mentor (either a grad student or a postdoc) who will train you in basic maintenance and experimental procedures. - Experienced students will work under the continued guidance of a mentor but will be expected to manage their own experiments and continue to help maintain the lab space. - Up to three credits can be received for FSHN 295 (credit for undergraduate research in other departments is available with respective departmental approval). You will be expected to: - Work at least 2-3 hours a week per credit. - Give one presentation per credit during Wednesday Jin Lab meetings.
To be considered for a research position, you will need to gain a basic understanding of the nature of the research at the Jin Lab. Undergraduate students are strongly encouraged to attend the lab meetings for at least four weeks. Lab meetings are held on Fridays at 3:00p in 202 Agricultural Bioprocess Laboratory and Wednesdays at 4:00p in room 1140 of the Institute for Genomic Biology. Wednesday meetings are presentations from the students about the results of ongoing projects. Friday meetings are a presentation and discussion of relevant and recent scientific research. Both meetings will give you the opportunity to meet the lab members and ask questions concerning their research. Please feel free to attend the meetings anytime during the semester - you may be asked to introduce yourself to Dr. Jin and the other lab members. After the four-week period, those who are interested in the research at the Jin Lab can present potential projects of interest to Dr. Jin, who will discuss possible options with you. Once the position is secured with Dr. Jin’s approval, you will need to see Brenda Roy for certification to work in the lab. The certification process will require you to take two online safety quizzes – a lab safety quiz and an IGB lab quiz – attend an undergraduate safety training, and renew your iCard for access to the lab. Once the certification process is done, you will officially be a part of the Jin Lab! For more information, visit: http://openwetware.org/wiki/Jin. If you have any questions, please email Dr. Jin at: email@example.com
Is there a fun way to learn programming?
2 projects | reddit.com/r/learnprogramming | 11 Mar 2022
Happy Adventure Building! If you suddenly want something more powerful - https://github.com/instead-hub/instead/blob/master/doc/stead3-en.md

Making an SCP text based adventure - Any suggestions?
2 projects | reddit.com/r/SCP | 20 Feb 2022
For example INSTEAD Engine - docs, game examples.

I am making an Undertale text-based adventure game
2 projects | reddit.com/r/Undertale | 20 Feb 2022
Huh, this is a task for an advanced programmer. But you can take a ready-to-use engine for text games if your task is to make the game, and not to learn a programming language. For example: INSTEAD Engine - docs, game example (download, play online)

Oregon trail and similar games for terminal
2 projects | reddit.com/r/linuxquestions | 29 Aug 2021
https://github.com/instead-hub/instead/tree/master/src/tiny minimal console player implementation

Lua, a Misunderstood Language
9 projects | news.ycombinator.com | 29 Sep 2022
I'll add LOVR (https://lovr.org/), the 3D analog to LOVE. Haven't used it personally so ymmv.

I love Python, but I ought to branch out. I've done some stuff in C# and Java, but never as much as I've done in Python.
2 projects | reddit.com/r/ProgrammerHumor | 21 Sep 2022
If you are into VR, I'd try lovr.org. It allows you to build VR apps with just Lua code.

Ask HN: Anyone tried development using an Oculus?
5 projects | news.ycombinator.com | 7 Aug 2022
Personally I found LÖVR easy to use, based on Lua.
Trying to learn game development so I need to learn coding, am I learning wrong?
2 projects | reddit.com/r/gamedev | 28 May 2022
C# is about as object-oriented as a programming language can get. The object-oriented paradigm is unintuitive, and it is easy for both beginners and experts to produce bad code in it. If you're just starting out, I'd recommend working with a more procedural programming language such as Lua (LOVE2D, LÖVR, Defold). Once you've got that down, C# will be a bit easier to handle.

Demoing Our Displays (SimulaVR)
2 projects | news.ycombinator.com | 26 Mar 2022
Try https://lovr.org/ for a really quick and painless way to quickly create a VR app.

is there an app like roblox studio so you can have parts and code them with lua so they can move or something?
3 projects | reddit.com/r/lua | 17 Feb 2022
Some options might be the voxel game engine Minetest and the VR-focused general-purpose 3D game engine LÖVR.

How to make a game in lua?
2 projects | reddit.com/r/lua | 15 Jan 2022
love2d.org for 2d
https://lovr.org/ for 3d

The HTTP of the Metaverse
3 projects | news.ycombinator.com | 10 Dec 2021
Haven't had a chance to play with it but it seems other people like it!

What does a 'good' GitHub page look like? (Q for the Professionals)
5 projects | reddit.com/r/learnprogramming | 11 Oct 2021

Are there any good 3D Lua game engines?
2 projects | reddit.com/r/lua | 24 Sep 2021
I haven't used it but https://lovr.org/ looks real good.

What are some alternatives?
raylib - A simple and easy-to-use library to enjoy videogames programming
A-Frame - :a: Web framework for building virtual reality experiences.
love - LÖVE is an awesome 2D game framework for Lua.
OculusQuestMixedRealityForiOS - Mixed Reality app for iOS
TIC-80 - TIC-80 is a fantasy computer for making, playing and sharing tiny games.
OpenXR-SDK-Source - Sources for OpenXR loader, basic API layers, and example code.
Godot - Godot Engine – Multi-platform 2D and 3D game engine
Simula - Linux VR Desktop
love-typescript-definitions - Write LÖVE 2D projects with TypeScript
linux - Linux kernel source tree
corona - Solar2D Game Engine main repository (ex Corona SDK)
squirrel - Official repository for the programming language Squirrel
Posted on July 10, 2016

I'm going to be finishing my M.Ed and student teaching next year, so I could certainly use a productivity boost. I decided to try OneNote, to see if I could streamline some of my workflow. Did I end up more productive? Yes, and no.

On OneNote as a workspace: I actually liked using OneNote for work. I put the app on my phone, and I already had it on my desktop and laptop as part of Win10. This was great because I could type notes for project ideas, or reminders, while waiting for class to start or in the waiting room at appointments. You can also set it to sync only on WiFi, which is important because I have a small data plan. I could also work on projects on my laptop at school that I had started on my desktop at home, and the changes sync when I pick up the project at home again. I also liked not needing to open my docs folder and load different files if I needed to pull from another project, or had an idea for something else while working on an assignment. I have a tendency to work on an assignment until I hit a wall, change to another assignment or project and work on that until my brain decompresses/gets over a writing block, and then go back to the first assignment. So being able to quickly access things was a plus.

As an experiment I started/shared a notebook with my BFF Petra, which we used to work on a writing project. She is currently living in the lower 48, and that makes collaboration hard - especially as we are on different shifts. The nice thing about OneNote, as opposed to trying to work via messenger or a Google doc, is the way projects can be organized. We can make comments/notes in a text pane next to our work, so you can see them at the same time. Additionally, the way notebooks are organized, you can have sections for plot/mechanics, character profiles, etc., and then separate pages in those sections - say, one for each main character.
In Google Docs it would either be one run-on file, or separate files you would need to switch back and forth between. Petra, who was rather skeptical of using OneNote when I started the project, liked it so much for our collaborative work that she opened another notebook to keep track of random ideas we bounce off each other, usually via messenger. Collecting them in one place is nice because you don't have to try scrolling back through weeks of messages, or remember to C/P thoughts into a file. So, I think OneNote did help me become more productive, in that projects are much more organized and I could easily work where-and-when I had time, even if I didn't have my laptop, the latest files on a thumb-drive, or a WiFi connection.

On OneNote to WordPress page import: Half the reason I was willing to try the OneNote program was because you're supposed to be able to connect it, with a plugin, to your WordPress site. I'm used to hand-coding my posts - 1) because it's familiar/habit, 2) because it lets me manipulate pieces outside what the visual editor accounts for, and 3) so that I have a copy on my hard drive. I figured creating posts in OneNote would allow me to skip some of the hand-coding, and still give me a back-up copy. Not so much. The plugin doesn't work. The directions in the plugin are out of date. And, though people have been asking about the connection issue for a year, the developers have not replied at all or fixed the issue. I tried a couple of times to get this to work, trawled through pages of results for any variation of the plugin name/issue that I could think of - and, in the end, came up with bupkis. I can't justify spending more than the 3 or so hours I already have on this, so I'm shelving the plugin experiment as FAILED.
More thoughts on the VR prototype we are currently working on, specifically on approaching our art style and aesthetic in a light, agile and effective manner.

Once we had finalized all of our base rules for the initial aspects of the experience we would be developing in this experiment, the next step was to jump straight into development. As we would be using a RAD dev model here, we wanted to spend a little time upfront establishing some aesthetics for the world we would be creating. We had already settled on a low poly art style as one of the base rules for our experience and had gathered various references for what we wanted our world to be like; the next step was to create an asset pipeline and start creating assets that could be tested in Unreal Engine.

As our art team is proficient with Cinema 4D, we chose to use it as our primary 3D modelling tool (also leveraging Maya to make small tweaks when needed). Our texturing was done with a mix of Adobe Photoshop and Illustrator. Once we were happy enough with the assets for the prototype, we imported them into Unreal Engine, where most of the lighting and shading work for these assets would be done to achieve the look and feel of the prototype.

To start, we crafted a low poly terrain in Cinema 4D. Upon importing this terrain into Unreal Engine, the model looked like this:

Electing to use simple flat colors as opposed to textures on the terrain, we knew that we would have to create materials without textures within Unreal Engine to achieve the desired low poly look our design team had set as a goal of our experimental mission. In this respect Unreal Engine shines, as it offers a material editor that enables us to create shaders through a simple graphical user interface. For the first test we simply applied materials with colors to see how the output would look and feel on the terrain. With the simple material outlined above we achieved the following output.
While reminiscent of the Emerald City of Oz, which was kinda cool, this was not the look we were after, and it would require some additional changes to the material. Since we need each plane in the mesh highlighted properly to give the low poly look, the obvious fix is to get the normals of the mesh correct. With a little experimentation we made the following changes. Other than fixing the normals, I added a few parameters to change the roughness and specular values in the material, and with these simple tweaks we attained this output:

Adding a couple more materials and a little lighting evolved the terrain to look like this:

This got us a result that was close to our references and achieved the look and feel that we were aiming for. We then applied a similar version of this process to the other assets that we will be using to populate this level. In addition to the assets that we would be placing to create the level, we would need to model an approach to water that would capture the aura we are looking to achieve as a base, and elaborate and iterate upon it as we move forward on our prototype.

We have now begun to play with incorporating some of the game mechanics and level design principles we have outlined, and the island has begun to look something like this:

We will continue to share updates as the prototype advances and will hopefully begin to share some playable demos in the next 4-5 weeks. For more information on augmented and virtual reality software development, virtual reality development cost and virtual reality programming, please feel free to reach out to us at CXR.Agency and we would be more than happy to assist. At CXR.Agency, we make sure to keep our pulse on all things AR, VR and XR. Our mission is to reimagine how people interact with brands. To disrupt the status quo and uncover values others can't find. To solve tomorrow's business challenges in thoughtful, elegant ways.
We aim to be strategic leaders in emergent technologies and innovators in user experiences. Our mission is to arm businesses for the digital revolution. Check out our VR case studies at CXR.Agency for more information.
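The normal-fixing step mentioned in the post can be illustrated with a short sketch (mine, not part of the project's actual pipeline): for a flat-shaded low poly look, each triangle gets its own face normal, computed as the normalized cross product of two edge vectors, instead of smoothed per-vertex normals.

```python
def face_normal(v0, v1, v2):
    """Return the unit normal of the triangle (v0, v1, v2).

    Each vertex is an (x, y, z) tuple. The normal is the normalized
    cross product of the edge vectors v1-v0 and v2-v0, which is what
    gives every face of a low poly mesh its single flat-shaded color.
    """
    ax, ay, az = (v1[i] - v0[i] for i in range(3))
    bx, by, bz = (v2[i] - v0[i] for i in range(3))
    # Cross product of the two edge vectors.
    nx = ay * bz - az * by
    ny = az * bx - ax * bz
    nz = ax * by - ay * bx
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return (nx / length, ny / length, nz / length)
```

In a DCC tool or engine this corresponds to disabling normal smoothing (or splitting vertices per face) so each triangle keeps its own normal.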
At GDC 2018, attendees will have the opportunity to interact with an array of sponsors who help fuel the games industry, including our Diamond Partners, whose support plays an integral role in the success of GDC. To introduce you to our Diamond Partner Intel, we've reached out to Intel director of consumer software marketing Steve Augustine to ask about the future of what Intel is doing for game developers, and what attendees can expect from Intel during GDC!

Would you please introduce yourself and explain what we can expect from Intel at GDC?

Intel is an essential partner for game developers who want to optimize PC performance as well as maximize their consumer reach on one of the largest gaming platforms in the world. We hope devs will connect with the Intel® Game Developer program for tools, resources, and opportunities to help them bring the best game experience to the biggest worldwide audience. We invite developers to meet up at our booth for hands-on demos with new technology, get advice from Intel engineers, attend one of our technical training sessions, or just stop by for a pick-up eSports match. Our partner, Green Man Gaming, will provide one-on-one publishing consultations. Intel will host key events, including our University Showcase (for students), a diversity mentorship café, and mixers with our partners, including Unity.

Intel provides significant documentation and support for devs through the Intel® Game Developer program. Why does Intel provide universal accessibility to its tools?

A strong open ecosystem is the foundation of what makes the games industry so great, and Intel is a driving force behind its technical innovation. We believe that the most exciting advances and opportunities for game developers are on Intel® technology, and we invite you to the Intel Game Developer program for additional benefits.
Regardless of your experience level or where you are in your development cycle, once you join our program you, as a game developer, will have access to powerful Intel® performance analyzers, libraries, and other software tools to help you test and optimize your games for the very best experience on Intel® architecture. Our tools, such as the Intel® Graphics Performance Analyzer, are free, and our programs are open to everyone.

What does Intel's future path look like?

We recently launched our new Game Developer program, offering extensive benefits and new go-to-market opportunities that we hope members will take advantage of:

· Free, premium tools – Includes powerful performance analyzers, libraries, and other software tools to help you test and optimize your game for a great experience on Intel® architecture.
· Intel® testing certification – Ensures your game delivers a stellar experience on Intel® processors.
· Knowledge base – Provides access to resources (training, articles, tips) and people (forums, events) that help you finish your game and go to market.
· Community – Get inspired, make connections, and get answers to your programming questions on Intel® Dev Mesh.
· Networking opportunities – Reach a wider audience, gain credibility, and engage with the industry through opportunities to share your game or speak at workshops and events, tradeshow collaborations, contests, sponsorships, and more.
· Level-up fellowship – Network with and learn from other indie developers, as well as from industry experts and partners. Tap into the broader game dev community to increase your knowledge base as you get ready, get noticed, and get big.
· Promotional opportunities – Games that receive certification may be included in Intel showcases, and they might also be selected for additional opportunities such as PR campaigns, social promotions, and publishing by Green Man Gaming.
· Monetization opportunities – Select games are chosen for further sales opportunities with Intel.
What will GDC attendees see at Intel's booth this year?

· New and exciting games from up-and-coming developers, optimized for Intel technology.
· Learn about developing games on Intel technologies, such as Intel® Core™ processors, Intel® Iris® graphics, and Intel® Optane™ memory.
· Learn about the Intel Graphics Performance Analyzer, VR techniques for scaling, simulation optimizations, wireless VR, and more!
· Meet at the indie lounge for in-depth technical conversations and advice.
· eSports matches for devs that want to blow off a little steam and have fun.
· Connect with Green Man Gaming, who will be sharing the inside scoop on how to get your game published and generate sales.
Utility to identify and remove unnecessary traces

Heavy usage of copy/paste and cut/paste to create requirements may have resulted in the accumulation of many unwanted traces. Borland strongly encourages the deletion of unwanted traces before upgrading to version 2006 of the CaliberRM server. CaliberRM 2006 introduces traceability versioning in order to support the inclusion of traceability information in baselines. During the upgrade, your database size is expected to grow in proportion to the number of traces in your database due to traceability versioning. Therefore, you should strive to eliminate any unwanted traces before performing the upgrade to 2006 in order to prevent the database size from growing unnecessarily. This article contains an SDK utility that can help you find and remove unwanted traces from your database.

***This utility is provided on an "as is" basis, without any direct or implied warranty. Borland Software Corporation has no obligation to guarantee its functionality or the integrity of the server data when it is used. You are free to use this utility at your own risk.

***As with any unsupported application, it is recommended that you log in as the database owner and create a backup of your server data before using this utility.

***This utility has been designed to work with CaliberRM version 2005 Release 2 Service Pack 1 (8.1.534). Please upgrade your CaliberRM server to R2 SP1 before attempting to use the attached utility. In addition, you will need to have installed the R2 SP1 version of the CaliberRM SDK on the client machine. Please note that attempts to run this utility with a version other than R2 SP1 will result in errors.

This utility finds and deletes CaliberRM traces for the specified set of requirements.
It is a wizard-based utility that is primarily intended for identifying and removing extraneous traces that were automatically created by either tree hierarchy or copy-and-paste actions. It can also be used to delete traces for projects that are obsolete or being prepared for deletion. This utility package contains a complete JBuilder 2006 project, including all relevant source code. The "src" directory stores the Java source code for the utility and can be used to modify/recompile the utility. The compiled .jar file and related command file are located in the root directory. To install this utility, simply UNZIP the contents to any local hard drive directory (e.g.,

-- CaliberRM SDK

Please note that this application uses the CaliberRM R2 SP1 SDK to access the CaliberRM server. In order to run it, the CaliberRM R2 SP1 SDK must be installed on the client machine. The CaliberRM client and server components do not need to be installed, though it is often useful to install a CaliberRM client on the machine in order to test connectivity outside of the SDK.

-- Version 1.5 JRE

Please note that this utility requires that you have a 1.5 version of the JRE installed on your machine. The CaliberRM SDK installs a supported version of the JRE at C:\Program Files\Borland\Java\Sun1.5.0_03 (default install location). This JRE can be used by editing the RMDeleteTracesWizard.cmd file with Notepad such that the full path to the javaw executable is used. The javaw.exe is located in the bin directory of the JRE installation.

-- CaliberRM SDK Utilities

Please note that this utility requires the "CaliberRM SDK Utilities" framework (version 2006 or higher) as well. Specifically, the jar file "CaliberUtilities.jar" is necessary and must be available when customizing or recompiling this application. Note that you will not need to download the "CaliberRM SDK Utilities" framework if you simply plan to execute the application, rather than inspect and/or recompile the source code.
The RMDeleteTracesWizard.zip archive is sufficient for those users who only wish to execute the utility.

To run this utility, simply edit the "RMDeleteTracesWizard.cmd" command file as directed in the command file comments (if necessary) and then execute it from a command prompt, or double-click on it. One optional parameter can be supplied from the command prompt to specify the fully qualified file name of the XML configuration file (see below). Most users can ignore this configuration file; it isn't necessary to use, it is simply a convenience.

XML Configuration File

Run-time parameters can be controlled via an XML file, by default named "RMDeleteTracesWizard.xml". If specified, this file can be used to provide default login credentials and to specify logging behavior. If not specified, default settings will be used instead. A sample has been provided, and comments in the file indicate the use of each parameter. The name of this configuration file should be the only run-time argument passed to the Java class when it is executed.
I am trying to connect a SharePoint Online list to Power BI, but it's not connecting. I am getting the error below:

"Unexpected Error: Specified Value has invalid CRLF characters. Parameter name: Value"

I don't understand what this error means exactly. Can anyone help me resolve this issue?

Hi @sdeoghar. It looks like it's common enough that other people have run into this, but I don't know what options you have available to you to fix it. This blog mentions one way that worked, but it involves running an update for SP. There's also this community post that has a few ideas, including logging in as an Organizational Account. From what I can tell, it's not a problem with the data in your SharePoint list. Most of the examples I've seen were for SharePoint on-prem; I see you're connecting online. Let us know if nothing above helps.

Thanks for your response. I went through your posts. One talks about an Excel to SharePoint list connection, which is not my case, and the other post is still in discussion. My scenario is that somehow I have to connect my SharePoint list directly to Power BI, which I am unable to do, and I am getting the error. With that in mind, do you have any idea?

I'd still suggest trying them out. The list from Excel is through Power Query, which is Get Data in Power BI. They're not 100% the exact same, but it's pretty close. The post that didn't have a resolved answer still had a workaround of logging in as an organizational account. It might not be possible to find someone with the exact same scenario, but you might find several things that are close and have the same solution by typing the exact error message into Google. Options for Power BI or Power Query will be relevant.

I have an online SharePoint list which has a specific URL, and with this URL I am trying to make a connection (OData connection) to Power BI. But it's throwing the error.
Based on my understanding, you are trying to connect to a SharePoint Online list via the OData feed data source type in Power BI Desktop but get errors, right? Based on my test, when we connect to a SharePoint Online list using an OData feed, we need to append _vti_bin/listdata.svc at the end of the site URL. For example, if the URL of the SharePoint Online list is this: https://microsoft.sharepoint.com/teams/Qiuyun01/Lists/list01, then we need to specify the OData URL accordingly. In your scenario, please check if the URL is correct, and type the URL rather than copying and pasting it. In addition, in Power BI Desktop there is an OOTB data source type for retrieving SharePoint Online lists: the SharePoint Online List data source. If you have any question, please feel free to ask.

Thank you very much for your wonderful reply. I tried doing as you said and modified my URL by appending "_vti_bin/listdata.svc" to it, but I am still getting the same error. I am using SharePoint 2010, so does the version have anything to do with it? Also, to modify the URL, do we have to do anything on the backend, or just type the original URL with "_vti_bin/listdata.svc" added at the end and hit connect?

As you can see from my original post, the URL of the SharePoint Online list is this: https://microsoft.sharepoint.com/teams/Qiuyun01/Li In your scenario, please also append "_vti_bin/listdata.svc" after the site name to see if the issue persists. Did you try to connect to the SharePoint Online list following the steps in my original reply? See this article for more information on using REST with SharePoint 2010: https://msdn.microsoft.com/en-us/library/ff798339.aspx
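The URL pattern described in the replies above can be sketched as a small helper. This is an illustration only: the function name is mine, and the `contoso` site URL in the test is a placeholder for your own site URL.

```python
def sharepoint_odata_url(site_url: str) -> str:
    """Build the legacy OData endpoint for a SharePoint site by
    appending the listdata.svc service path to the *site* URL
    (not the URL of an individual list)."""
    return site_url.rstrip("/") + "/_vti_bin/listdata.svc"
```

The resulting URL is what you would paste into the OData feed connector in Power BI Desktop.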
export interface ScriptTagObject {
  src: string;
}

export class ScriptTag {
  localFileName = 'b.js';

  // Constructor parameters are unused placeholders.
  constructor(a?: unknown, b?: unknown, c?: unknown) {}

  // Checks whether a script tag whose file name matches `localFileName`
  // already exists among the fetched script tags.
  protected async exists(): Promise<boolean> {
    const scriptTags: ScriptTagObject[] = await this.fetchAllScriptTags();
    // Reduce each script tag record to its file name.
    const scriptTagFileNames: string[] = scriptTags.map(
      (scriptTagRecord: ScriptTagObject) => this.getFileName(scriptTagRecord.src)
    );
    console.log(scriptTagFileNames, `=====scriptTagFileNames=====`);
    const fileIsFound: boolean = scriptTagFileNames.includes(this.localFileName);
    console.log(fileIsFound, `=====found=====`);
    return fileIsFound;
  }

  // Stub implementation; in practice this would query an external source.
  protected async fetchAllScriptTags(): Promise<ScriptTagObject[]> {
    return [{ src: 'a' }];
  }

  // Extracts the file name from a script src; here the src is returned as-is.
  private getFileName(src: string): string {
    return src;
  }
}
Authored By: Anandh Venkatraman, Chief Digital Officer, APJ, EMEA, Dell Technologies Consulting

Where we end up depends on the road we choose. IoT as a Service is one such path for enterprises that are looking for one. After three decades of accelerated socio-economic growth driven by science and technology, the next wave of disruption is here. The Internet of Things (IoT) is changing the way we communicate with everyday things. With real, practical products and services gaining importance in our day-to-day activities, from fitness trackers to automated homes and offices, the wave has already arrived. 5G connectivity will soon enable new technologies at scale, such as connected vehicles and even connected healthcare. New technologies, new products and new connective potential are also enabling new business models. The next generation of products will need to connect (globally), interact with new partners and ecosystems, and may be available in as-a-service formats (made possible through real-time remote connectivity).

The lack of in-house expertise in an enterprise to manage these disparate layers is among the top reasons that have constrained the potential growth of IoT. With numerous protocols and standards out in the market, understanding the interoperability of these disparate layers is what adds the most complication. The business forecast for the IoT market is already impactful at a global level. Over the next five to ten years, estimates vary: Gartner estimates a benefit of USD 2 trillion in 2020, IDC estimates USD 1.7 trillion in 2020, and McKinsey estimates growth of USD 4 trillion to USD 11 trillion. The clear takeaway is that there is growth, and the curve is steep. Yet despite this forecasted growth, Gartner notes that IoT and its associated business models are not enterprise-ready.
The platform of platforms approach towards designing, developing, deploying, and operationalizing various IoT applications and business processes would be a solution to this. The Internet of Things has come a long way, from hacks to research projects to efficient, sustainable everyday products, services and applications capturing the customer's imagination and driving a technological revolution. While the industry is trying to realize the full benefits of this ecosystem, enterprises should address the challenges faced by sensor hardware providers, big data platforms, service integrators, application developers and end users.

Platform of platforms

There is a need for a holistic solution, a reference architecture, to cater to most enterprise IoT needs. While handling most IoT solutions, this solution should also be flexible enough to handle cross-industry applications - hence, a platform of platforms. Our approach to this problem covers capturing data from several enterprise and telemetry data sources, processing petabytes of byte-sized datasets originating from various sensors, advanced machine learning systems for recommending actions based on real-time analysis, and connecting to end-user business systems.

Over the last few years, we have worked on developing several IoT ecosystems in energy, traffic, healthcare, and automotive. While designing, developing and creating market-ready architecture, we have developed a cohesive IoT PaaS solution, keeping rapid design, seamless integration and scalable deployment in mind as principles. The platform needs capabilities for data streaming, smooth ingestion, event processing and analytics, elastic hardware and software computing, machine learning compatibility and end-user application integrations.
Various components of a generic IoT system would be:

Sensor Communications and Connectivity

The platform must support multi-protocol communications and connectivity. Sensors are connected to the primary stream systems by a direct internet connection if the sensors are WiFi-enabled. Otherwise, a gateway like the Dell Gateway 5100 acts as an edge server that communicates with sensors over various protocols. Edge nodes are responsible for the connections to any sensors, actuators, control systems and assets. The gateway also acts as an interface for controlling the sensors. Hence a reliable and stable gateway is the first touch point of this architecture.

Edge Data Manager

The primary role of the gateway software, after connecting with sensors, is to tag them through a sensor management layer. Registering the sensors, creating specific IDs, and connection management are the core tasks of the edge sensor manager. Once the sensors are registered, the data from the sensors is normalized to a standard format, e.g. for transport over MQTT. The data is either stored on the edge for edge analytics or exposed via JDBC connections or HTTPS-based REST calls for further processing. Analytics on edge systems becomes critical where business-critical information is needed at low latency, for example video analysis systems that need to do analytics in real time, or healthcare systems that need to make calculations on the gateway. Hence, edge gateways also have an analytics component like EdgeX Foundry deployed to serve the edge analysis component.
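As a rough illustration of the normalization step described above: an edge gateway tags each raw reading with a sensor ID and timestamp and serializes it into a common payload before publishing it (for example over MQTT) or exposing it via REST. The field names and JSON shape below are my own assumptions, not a documented Dell or EdgeX schema.

```python
import json
import time

def normalize_reading(sensor_id, protocol, raw_value, unit):
    """Tag a raw sensor reading with identity and time metadata and
    serialize it to a JSON payload, as an edge data manager might do
    before handing it to the transport layer."""
    record = {
        "sensor_id": sensor_id,
        "protocol": protocol,       # e.g. "modbus", "zigbee", "ble"
        "value": float(raw_value),  # coerce to a common numeric type
        "unit": unit,
        "timestamp": int(time.time()),
    }
    return json.dumps(record)
```

The same payload shape can back both paths the article mentions: published to an MQTT topic for streaming ingestion, or returned from an HTTPS REST endpoint for batch pulls.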
Platform Data Manager

The Platform Data Manager is the data center component that acts as a backend platform service to handle:
- Data Ingestion: The ingestion manager caters to both the real-time and batch-processing needs of the sensor data, in the Lambda architecture.
- Data Pre-processing: Standardization, normalization and verification of the data are done as part of platform ingestion.
- Data Aggregation: Raw data is converted into aggregated data marts for analysis and analytics at the data manager.

Platform Analytics Manager

The Analytics Manager works on the information in the data lake, ingested and pre-processed through the data manager. Various kinds of analytics are performed over the data in the data lake, such as:
- Correlation Analysis: e.g. how to change the ambiance of a room according to the weather outside
- Forecasting: e.g. energy forecasting
- Predictive Analysis
- Prescriptive Analysis: e.g. automatic demand response systems

Application Manager and Business Layer

From the data after analysis and analytics, the platform needs to support business needs with various end-user applications; these could be plugged in using PaaS systems like Pivotal Cloud Foundry. Security and encryption of data are very prominent at the various system integrations; hence, at the different data touch points across the platform, a security protocol is enabled and logged for further lineage and audit.

End to End IoT Solution

A new IT revolution is underway. Over the last few years, there have been advancements in scale-out architecture, distributed systems, cloud platforms and end-to-end big data platforms; the next would be an IoT platform designed for enterprises. This platform would need to comprise various paradigms of software advancement and cater to business functions and applications.
Such a platform would have capabilities spanning from sourcing sensor information, through real-time ingestion on distributed-systems hardware and software, data processing and normalization, and data science and insights, to end-user applications running on the platform as a service. The Internet of Things, distributed systems, advanced data engineering, machine learning and advances in application development are changing the enterprise ecosystem. Petabytes of new machine-generated, and hence well-structured, data arrive every second the machines run. Applications on this data require both a historical understanding of it and real-time analytics for determining patterns and recommendations. This ecosystem therefore demands an enterprise-ready end-to-end platform, covering systems from the edge, to on-premise systems, to cloud application systems.
First post to this board. And probably the last! I have done some data mining and analysis to develop rules-based systems for several "uncorrelated" ETFs. I'm going to describe my methods and seek critical feedback from other posters. (Famous last words from a one-time poster?) I code in Fortran 90 (the guts) and Perl (high-level processing), and also use open-source signal processing software. I'm only trading once per day, and I execute the trades manually (in Robinhood). I'm trading ETFs, not options or futures.

I'm using past "states" to predict the future behavior of the target security. A past state could be the 30-day past S&P return, VIX over or under its 200-day MA, etc. For each predictor, say the 30-day past S&P return, I loop over all possible states (e.g., 0-3%, 3-6%, etc.) and backtest the target security. This generates a report:

             avg return
security     lwr     upr  #trade   profit    cash    ratio     power     PPT
XXXXXXXXXX  -0.370  -0.340      3   313.00    1.22   255.97  80119.24  104.33
XXXXXXXXXX  -0.340  -0.310      8  -144.75    3.13   -46.24   6693.51  -18.09
XXXXXXXXXX  -0.310  -0.280      4  -192.25    1.49  -129.02  24803.41  -48.06
XXXXXXXXXX  -0.280  -0.250      4  -197.50    1.68  -117.63  23232.66  -49.38
XXXXXXXXXX  -0.250  -0.220      9  -159.25    4.48   -35.51   5655.05  -17.69
rm30_ES.F   -0.220  -0.190     14   356.50   17.68    20.16   7188.03   25.46
rm30_ES.F   -0.190  -0.160     37   962.45   72.32    13.31  12809.16   26.01
XXXXXXXXXX  -0.160  -0.130     46   211.00   89.96     2.35    494.92    4.59
XXXXXXXXXX  -0.130  -0.100     79   -67.75  215.26    -0.31     21.32   -0.86
XXXXXXXXXX  -0.100  -0.070    136  -226.67  364.57    -0.62    140.93   -1.67

I compute a variety of metrics for each backtested strategy. Most (the ones with the XXXX's) can be discarded immediately if they lose money, have insufficient trades, insufficient profit/trade, etc. After aggregating "adjacent" strategies from the report, I produce a master report of all the accepted strategies (usually about 200), which I insert into a spreadsheet and rank using cutoffs.
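The bucketing-and-backtest loop described above can be sketched roughly like this. Only the shape of the method comes from the post; the data, the bucket grid, and the `bucket_report` helper are made up for illustration:

```python
# For one predictor (e.g. 30-day past S&P return), bin each day into a
# state bucket and tally the forward return of the target security.
def bucket_report(predictor, fwd_returns, lo=-0.37, width=0.03, nbins=10):
    report = [{"lwr": lo + i * width, "upr": lo + (i + 1) * width,
               "trades": 0, "profit": 0.0} for i in range(nbins)]
    for x, fwd in zip(predictor, fwd_returns):
        i = int((x - lo) // width)
        if 0 <= i < nbins:          # ignore states outside the grid
            report[i]["trades"] += 1
            report[i]["profit"] += fwd
    return report

# Tiny synthetic example: the first two observations land in bucket 0.
rep = bucket_report([-0.36, -0.355, -0.20], [1.0, -0.5, 2.0])
print(rep[0])
```

A real run would then compute the per-bucket metrics (profit/trade, ratio, etc.) from these tallies before discarding the losers.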
In this case, I settled on the 6 best strategies, which I test daily, buying if the "signal" hits. Here's what the daily analysis looks like for one strategy. I'm trading UPRO, and the signal is a consumer-confidence index crossing its 200-day MA within the past 30 days. The backtest shows me when signals were received. If I have a signal since yesterday, I trade. The green dots show cash in the market (often, in the past, this strategy had NO cash in the market). I've only been trading for a month, far too soon to know if I've got something. The combined strategies seem to perform well in backtesting. Here's the combination of 5 trading strategies for XOP, backtested for ~9 years. I'm able to follow the major upswings, and I'm not heavily invested during the big downturns. In sideways markets, I'm not heavily invested, but I seem to cherry-pick the secular increases. Thus far, I don't optimize position sizing or exit timing. I simply hold the target security for 30 days, unless it stops out.
How about this: what if your mother decided to try out Debian and had some questions about it? Maybe she would be overwhelmed and show up at the forum. I got my degree in computer science and I have worked in the SEO industry, so I'm a much more competent Google user than most people. Really, I'm a lot more competent than 99% of people on any topic pertaining to computers, but I also have a shred of empathy for people who don't have my education and experience, and as this thread shows, there are a lot of people whose lives are so pathetic and meaningless that they really get a kick out of insulting people who aren't as knowledgeable and experienced.

But sometimes she won't have the right search phrase. Then I also include some details about my circumstances: "I tried using Google to answer my question, using these key words: 'the key words or phrase I used', but I did not get any results that could help me." Usually that results in someone suggesting a better way to phrase the question, or better key words, or sometimes, when enough details are included, someone knows the answer.

The SEO wizards with their "degrees" are constantly using any forum, website or blog, wherever they can, to put in their phrases and garbage, which causes the search engine to give us results that are mostly advertisements or completely irrelevant to what we are searching for. If someone is not sure about what they find on Google, they should post and ask about it; people will tell them which answer is best, if they also explain enough details. On a Google search you get hundreds of answers that may or may not be correct, and if you follow the incorrect advice it may hose your system and/or waste your time. When someone says "Google it" it means they have no idea what the correct answer is, or that they don't want to be bothered helping you.
(In which case, why do they even post at all?) It is a type of cyber bullying. They are saying, "I know the answer, but you are not worth helping."

Why do people respond to questions with "google it"? Yes, I wish the SEOs and marketing engineers would stop polluting the web with their "self-righteous crap" and advertising, but that is another subject. It is common knowledge in the SEO industry that a page with changing text is seen by the crawlers as an active blog/news site, so the rank is bumped up. Believe me or not, I don't care; just stop polluting the web with your self-righteous crap.

GarryRicketson wrote: Why do people respond to questions with "google it"?

The answer to that is really quite simple: I respond to a question with "google it" because I have googled it and found good, useful results, and I believe that is the case for most of the others when they suggest that someone google it.

If you've found good, useful results through your search, why not link to one of those directly? Usually that is the problem for most people: getting the right search phrase. Of course this is dependent on the user checking their Control Panel before posting their query...
Ubuntu is a Linux distribution which is mostly used in enterprise businesses, IoT, desktops, and the cloud. Ubuntu is an open-source operating system based on Debian, introduced in 2004. As a Windows user, I wanted to test how the Ubuntu operating system works. During my research I found three ways to easily install Ubuntu on Windows 11. Each of these methods has its use: for instance, if you want to use Linux applications as Windows native apps, then you should install WSL. However, if you want to test it in a graphical interface, the best option is to use a virtual machine, because this way it is not going to harm your computer and you will certainly not lose any data. At the same time, if you are doing more robust work, dual boot Windows and Ubuntu on multiple drives or a single hard drive.

1. Install Ubuntu on Windows 11 with WSL
Windows Subsystem for Linux, AKA WSL, was introduced fairly recently by Microsoft. Using WSL you can install Ubuntu or any other Linux distro with a single command. Better still, you can also use Linux applications as Windows native apps.
- Launch the Terminal as Administrator.
- In the Terminal, execute this command: wsl --install
- After the installation of Ubuntu on Windows 11 using WSL, restart your computer.
- Ubuntu will launch automatically; if it does not, search for Ubuntu and configure it manually.
- By configuration, I mean creating a username and password for Ubuntu.
Note: If you want to use Ubuntu apps as Windows 11 native applications, find the package name and execute: sudo apt-get install PackageName -y. For instance, install the gedit text editor on Windows 11 using this command: sudo apt-get install gedit -y

2.
Install Ubuntu on Windows 11 using a Virtual Machine
There are many virtual machine applications that you can use to install Ubuntu and test it directly on your Windows 11 computer: for instance, VirtualBox, VMware Workstation Pro, and VMware Workstation Player. In this section you are going to learn how to install Ubuntu on Windows 11 using VMware Workstation Player.

Step 1. Download VMware Player and the Ubuntu ISO Image
- Launch your favorite browser and search for VMware Workstation Player.
- Open one of the links and download VMware Workstation Player 17.
- Following that, install VMware Workstation Player 17 on your Windows 11 machine. The installation process is simple: just launch the setup, click Next through to the install section, and click Finish. (That's why I have not covered it step by step.)
- Again, open your favorite browser and search for Ubuntu, then download the latest version of the operating system.

Step 2. Create an Ubuntu Virtual Machine
- Launch VMware Workstation Player after the installation.
- Choose to create a new virtual machine.
- At this step you would normally attach the Ubuntu ISO to the virtual machine; however, for now choose "I will install the operating system later" and move on by clicking Next.
- Choose Linux as the guest operating system and Ubuntu as its version.
- Name your virtual machine whatever you want. Here you can also save it in any other location you prefer; by default it will be saved in the Documents directory under drive C.
- Specify an amount of storage for the Ubuntu virtual machine, but make sure it is not less than 20 GB.
- Click Finish.
- The virtual machine has now been created, but we have not yet added the Ubuntu ISO to it.
For that, click on Edit virtual machine settings, go to CD/DVD (SATA), choose Use ISO image file, click Browse and locate the Ubuntu ISO. Once everything is done, click OK.

Step 3. Install Ubuntu on VMware Workstation Player 17 in Windows 11
- Select the Ubuntu virtual machine and click on the play icon in the menu bar.
- The first window you will face is the GRUB bootloader window, where you simply go with the default option, or wait 30 seconds and it will select it automatically.
- Wait a moment and you will be on the Ubuntu installation page; choose Install Ubuntu.
- Click Continue on the keyboard layout window.
- Choose Normal installation.
- On the partition window, click Install Now.
- A new window will appear asking to "Write the changes to disks?"; simply continue.
- Choose your location and click Continue.
- Add a username and a password for the Ubuntu virtual machine.
- Now wait until the installation of Ubuntu on VMware Workstation Player 17 completes. Once it is complete, the system will restart, but before restarting you need to remove the Ubuntu ISO. For that, click on the triangle icon in front of Player > Removable devices > CD/DVD (SATA) > Disconnect. Now press Enter to restart the Ubuntu virtual machine.
- Log in to Ubuntu by entering your password.

3. Dual Boot Windows 11 and Ubuntu
Dual boot is another option that you can use to install Ubuntu and test it directly on the main hardware. But during the installation of Ubuntu, if you are not careful with the partitioning, you may wipe out your entire drive.

Step 1. Create a Bootable Ubuntu USB Drive
- Get an 8 GB USB flash drive.
- Open your favorite browser, go to rufus.ie and download the portable version. If you do not have the Ubuntu ISO image, download it too.
- Launch Rufus and attach the USB drive to the computer.
- Click Select and locate the Ubuntu ISO image.
- Don't change the other options; just make sure the partition scheme is GPT, since Windows 11 does not support MBR.
- Click Start and wait until the ISO files are copied to the USB; once everything is done, click OK and close Rufus.

Step 2. Shrink Storage For Ubuntu
- Right-click on the Start menu and choose Disk Management.
- Choose any drive that has at least 20 GB of storage to offer. Right-click on it and choose Shrink Volume.
- You will see a specific amount of storage shown as unallocated in black; leave it there. If you have attached a separate external drive just for the Ubuntu dual boot, skip step 2.

Step 3. Install Ubuntu Alongside Windows 11
- Attach the bootable Ubuntu USB flash drive to the computer, then open the Start menu, click on the power button, hold Shift and click Restart, then choose the USB flash drive. (You can also shut down the computer, then power on and press the boot key; either way will work.)
- Wait, and soon enough you will be on the Ubuntu installation page; click Install Now.
- Connect your computer to WiFi.
- On the partition window you have to choose "Something else", or else you will remove everything from your hard drive, including Windows 11.
- We need to create two partitions: one for Ubuntu and the other for swap. For the swap partition, a maximum of 2 GB is fine. Choose the free space that you shrank in step 2, click on the + icon, and enter the amount of storage, sparing 2 GB for swap. Also make sure "Logical" and "Beginning of this space" are selected, and choose / as the mount point. Once everything is done, click OK.
- For the remaining free space, select it, click on the + icon, then under "Use as" choose "swap area".
Click OK and wait until the partition is created.
- Select the first partition and click Install Now.
- "Write changes to disks?" will appear; click Continue.
- The remaining steps are easy: select your country, choose the keyboard layout, create an account, then wait for the installation to complete.
- Finally, once the installation is complete, remove the USB and choose Restart Now.
- Once the system restarts, choose Ubuntu using the arrow keys and press Enter.
- Now log in and enjoy using Ubuntu alongside Windows 11.

You can test anything once you install Ubuntu on a Windows 11 computer; it doesn't matter whether it is installed using WSL, a virtual machine, or dual boot. These are the techniques I know to install and test Ubuntu on Windows 11 and Windows 10. What techniques do you know? Let me know in the comments.
no option to include negative prompts in python CLI help?

Ran python -m python_coreml_stable_diffusion.pipeline --help but am not seeing an option for negative_prompts, though based on this issue, it seems it should be available unless I'm misunderstanding something?

usage: pipeline.py [-h] --prompt PROMPT -i I -o O [--seed SEED] [--model-version MODEL_VERSION]
                   [--compute-unit {ALL,CPU_AND_GPU,CPU_ONLY,CPU_AND_NE}]
                   [--scheduler {DDIM,DPMSolverMultistep,EulerAncestralDiscrete,EulerDiscrete,LMSDiscrete,PNDM}]
                   [--num-inference-steps NUM_INFERENCE_STEPS] [--guidance-scale GUIDANCE_SCALE]
                   [--controlnet [CONTROLNET ...]] [--controlnet-inputs [CONTROLNET_INPUTS ...]]

options:
  -h, --help            show this help message and exit
  --prompt PROMPT       The text prompt to be used for text-to-image generation.
  -i I                  Path to input directory with the .mlpackage files generated by
                        python_coreml_stable_diffusion.torch2coreml
  -o O
  --seed SEED, -s SEED  Random seed to be able to reproduce results
  --model-version MODEL_VERSION
                        The pre-trained model checkpoint and configuration to restore. For available
                        versions: https://huggingface.co/models?search=stable-diffusion
  --compute-unit {ALL,CPU_AND_GPU,CPU_ONLY,CPU_AND_NE}
                        The compute units to be used when executing Core ML models. Options: ('ALL',
                        'CPU_AND_GPU', 'CPU_ONLY', 'CPU_AND_NE')
  --scheduler {DDIM,DPMSolverMultistep,EulerAncestralDiscrete,EulerDiscrete,LMSDiscrete,PNDM}
                        The scheduler to use for running the reverse diffusion process. If not
                        specified, the default scheduler from the diffusers pipeline is utilized
  --num-inference-steps NUM_INFERENCE_STEPS
                        The number of iterations the unet model will be executed throughout the
                        reverse diffusion process
  --guidance-scale GUIDANCE_SCALE
                        Controls the influence of the text prompt on sampling process (0=random
                        images)
  --controlnet [CONTROLNET ...]
                        Enables ControlNet and use control-unet instead of unet for additional
                        inputs. For Multi-Controlnet, provide the model names separated by spaces.
  --controlnet-inputs [CONTROLNET_INPUTS ...]
                        Image paths for ControlNet inputs. Please enter images corresponding to each
                        controlnet provided at --controlnet option in same order.

Version: pip freeze | grep "diffusion" returns
python-coreml-stable-diffusion @ git+https://github.com/apple/ml-stable-diffusion@5d2744e38297b01662b8bdfb41e899ac98036d8b

Hello @sarahspak, the issue you referenced points to a PR that added negative prompts to the Swift CLI only. If you would like to add negative prompts to the Python CLI as well, feel free to put up a PR for that. Thanks for bringing this up!

got it - happy to do so! I did notice an argument for [negative_prompt](https://github.com/apple/ml-stable-diffusion/blob/ce0be82717016f9a0bbdc6a09202860808203e7e/python_coreml_stable_diffusion/pipeline.py#L112) in the pipeline.py file. Was this feature not fully implemented?

Just pushed the change to unblock you, please open another issue if you hit any problems 🙏

thank you @atiorh!
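For reference, exposing such an option from a Python CLI is typically a small argparse addition. The sketch below is illustrative only; the flag name and the stand-in parser are assumptions, not the actual change that was pushed to the repository:

```python
import argparse

# Minimal stand-in for a CLI parser, showing how a negative-prompt
# flag could be surfaced alongside the existing --prompt option.
parser = argparse.ArgumentParser(prog="pipeline.py")
parser.add_argument("--prompt", required=True,
                    help="The text prompt to be used for text-to-image generation.")
parser.add_argument("--negative-prompt", default=None,
                    help="Text the sampler should steer away from (hypothetical flag).")

args = parser.parse_args(["--prompt", "a photo of a cat",
                          "--negative-prompt", "blurry, low quality"])
print(args.negative_prompt)
```

argparse converts `--negative-prompt` to the attribute `args.negative_prompt` automatically, which the pipeline would then pass through to the underlying call.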
updated Sep 20, 2008 11:38 am | 6,654 views

SQL injection attacks exploit a security vulnerability occurring in the database layer of an application. Really, this belongs to a more general class of vulnerability which can occur whenever one programming or scripting language is embedded inside another. (So, for this reason, care must be taken when embedding code from one language inside another.) Specifically, though, the SQL injection vulnerability comes about when user input is either not strongly typed, and unexpectedly executed because of this, or is incorrectly filtered for string literal escape characters which are embedded in SQL statements.

Example of Incorrectly Filtered Escape Characters

The escape characters are passed into a SQL statement, resulting in a potential manipulation of the statements executed on the database instance by whoever uses the application (the end user). The vulnerability is illustrated by the following line of code:

statement := "SELECT * FROM users WHERE name = '" + userName + "';"

This is supposed to pull the records of a specific user from the user table. But if the "userName" variable is manipulated in a particular way by a malicious user, it can easily do more than the author intended. If the "userName" variable is set to:

a' OR 't'='t

it will cause the SQL statement to run like this:

SELECT * FROM users WHERE name = 'a' OR 't'='t';

In an authentication procedure, the evaluation of 't'='t' is always true, so the query matches every row regardless of the username supplied. Multiple statements can be executed with one call in most SQL Server implementations. But some SQL APIs (such as mysql_query from PHP) do not permit this type of call, for security reasons. This restriction will stop hackers from injecting entirely separate queries. However, it does not stop the modification of queries!
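The effect of the manipulated input above can be reproduced with Python's built-in sqlite3 module, alongside the standard remedy of parameterized queries. This is a self-contained illustration added here, not an example from the original article:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

user_name = "a' OR 't'='t"

# Vulnerable: string concatenation lets the input rewrite the query,
# so every row matches even though no user is named "a".
leaked = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_name + "'").fetchall()
print(len(leaked))  # every user in the table

# Safe: a parameterized query treats the whole input as one literal value.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_name,)).fetchall()
print(len(safe))  # no user has that literal name
```

The parameterized form never concatenates user input into SQL text, so the escape characters lose their power entirely.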
Just look at the following illustration, which shows the vulnerability that still exists: the value of "userName" used in the way shown in the SQL statement here would delete the "users" table and then reveal all of the data from the "data" table:

a';DROP TABLE users; SELECT * FROM data WHERE name LIKE '%

The ScanSafe Global Threat report (see reference below) found that web-based malware increased 278 percent during the period January through June 2008. This was in part due to large websites being compromised in June by SQL injection attacks. A large number of these attacks were detected back in March of that year. In June, ScanSafe found that SQL injection attacks accounted for 76 percent of all compromised sites. ScanSafe says the increasing number of these attacks on legitimate websites can be blamed on automated attack tools, which became freely available in the last months of 2007.

Microsoft and Hewlett-Packard launched free tools in June to help web developers and site administrators defend against the rapidly growing number of SQL injection attacks. One of the two Microsoft tools, called "UrlScan", is actually an updated version of a tool last released in 2003, and can now scan query strings - not only a URL itself, as before - so that it can filter the malicious strings that power SQL injection attacks. Hewlett-Packard's web security team posted "HP Scrawlr" - short for "SQL Injector and Crawler" - to its website. HP Scrawlr analyses web pages for vulnerability to SQL injection attack, then reports its findings.
Beginner sets for intermediate devs

For a long time I've been looking for some kind of robotics set which is programmable, but most things I've found are related to child education or involve putting all the hardware parts together yourself. I've been working as a software engineer for a long time, so I have a good knowledge of software languages, including C/C++. I've taken several steps with the Raspberry Pi, which was very interesting, but it shortly turned out I'm not that interested in soldering hardware parts or anything like that. Can you give me some advice on where to start with robotics programming for adults, without getting too deep into electronics or hardware engineering if possible, but concentrating on programming? Any advice would be great. Thanks

If you want to start programming a robot that's already built, the Scribbler 3 (S3) robot by Parallax, Inc. is relatively affordable and comes already assembled. The programming relies on Blockly, a GUI-based programming language for robotics systems. I think it's popular in schools. If you're interested in something a bit more robust, why not look into the iRobot Create® 2 Programmable Robot? It's built from remanufactured Roomba® platforms and comes preassembled. You could customize and program it (direct connection, Arduino, or Raspberry Pi). Finally, since you are already an experienced software engineer and are not interested in assembling hardware, you could look into Gazebo and MoveIt!, which are two robot simulation systems used in the industry (as far as I understand). They both offer a bridge to ROS, the Robot Operating System. I hope this helps!

The TurtleBot 3 robots are designed to be ROS (Robot Operating System) reference implementations. There are two versions, the Burger and the Waffle. The Burger is the least expensive. They are built of plastic pieces which need to be bolted together. They are very sturdy.
It is also possible to use a 3D printer to make more pieces if you want a larger robot, and they have instructions for different types of robots if you don't want one with differential steering. Dexter Industries sells a board that allows a Raspberry Pi to connect to Lego Mindstorms sensors and motors. This allows you to use a much more powerful computer to control robots using the Lego Mindstorms system. And then there are a number of systems from VEX Robotics. They might look like toys, but I put them a bit above the Lego Mindstorms in strength. If you prefer metal, then ServoCity has a number of kits and parts (including motors, motor controllers, beams, plates, hardware, wheels, etc.) to build robots, though generally you'll have to do your own electronics. I'm planning on using the electronics that came with my TurtleBot 3 robots with my tracked robot from ServoCity, plus a bunch of other stuff I've collected. One thing I suggest if you want to build less toy-like robots is a 3D printer. This can be useful to make parts that would be difficult or expensive to buy. YMMV, but I prefer the Original Prusa i3 MK3. Of course, I say that because it's what I'm used to.

They are a little pricey, but you can do really cool things with the Lego Robotics stuff. You can run Linux on the EV3 bricks and then program it however you want. They have a lot of choices for motors/servos and sensors, and everything plugs together pretty easily.
//
//  CoordinateView.swift
//  MyTestWorkProduct
//
//  Created by zhoufei on 2018/10/25.
//  Copyright © 2018 zhoufei. All rights reserved.
//

import UIKit

// Drawing method for the coordinate view
enum CoordinateType {
    case UIGraphics
    case UIBezierPath
}

// Clock direction
enum DirectionType {
    case onTime   // clockwise
    case unTime   // counterclockwise
}

class CoordinateView: UIView {

    // Drawing method
    var _coordinateType: CoordinateType = CoordinateType.UIBezierPath
    var coordinateType: CoordinateType {
        get { return _coordinateType }
        set {
            _coordinateType = newValue
            setNeedsDisplay()
        }
    }

    // Fraction of the full circle to draw
    var _progressValue: CGFloat = 0
    var progressValue: CGFloat {
        get { return _progressValue }
        set {
            _progressValue = newValue
            setNeedsDisplay()
        }
    }

    // Clock direction
    var _direction: DirectionType = DirectionType.onTime
    var direction: DirectionType {
        get { return _direction }
        set {
            _direction = newValue
            setNeedsDisplay()
        }
    }

    // Log text.
    // Swift properties do not support KVO by default; mark the property
    // `dynamic` (or post notifications manually from willSet/didSet).
    @objc dynamic var log: String = ""

    override func draw(_ rect: CGRect) {
        let height = rect.size.height
        let width = rect.size.width
        let partAngleWidth: CGFloat = 15
        let partAngleHeight: CGFloat = 10
        let arcRadius = height * 0.3
        let arcCenter = CGPoint(x: width * 0.5, y: height * 0.5)
        let atts = [NSAttributedStringKey.font: UIFont.systemFont(ofSize: 16),
                    NSAttributedStringKey.foregroundColor: UIColor.blue]
        let endAngle = _progressValue * CGFloat.pi * 2
        let clockState = (_direction == .onTime)

        let context = UIGraphicsGetCurrentContext()

        // Horizontal axis with arrowhead
        context?.move(to: CGPoint(x: 0, y: height * 0.5))
        context?.addLine(to: CGPoint(x: width, y: height * 0.5))
        context?.addLine(to: CGPoint(x: width - partAngleWidth, y: height * 0.5 - partAngleHeight))
        context?.move(to: CGPoint(x: width, y: height * 0.5))
        context?.addLine(to: CGPoint(x: width - partAngleWidth, y: height * 0.5 + partAngleHeight))
        NSString(string: "X").draw(at: CGPoint(x: width - partAngleWidth, y: height * 0.5 + partAngleHeight), withAttributes: atts)

        // Vertical axis with arrowhead
        context?.move(to: CGPoint(x: width * 0.5, y: height))
        context?.addLine(to: CGPoint(x: width * 0.5, y: 0))
        context?.addLine(to: CGPoint(x: width * 0.5 - partAngleHeight, y: partAngleWidth))
        context?.move(to: CGPoint(x: width * 0.5, y: 0))
        context?.addLine(to: CGPoint(x: width * 0.5 + partAngleHeight, y: partAngleWidth))
        NSString(string: "Y").draw(at: CGPoint(x: width * 0.5 + partAngleHeight, y: 0), withAttributes: atts)

        // Arc
        var des: String = ""
        switch _coordinateType {
        case .UIGraphics:
            des = clockState ? "UIGraphics context drawing, clockwise" : "UIGraphics context drawing, counterclockwise"
            context?.move(to: CGPoint(x: width - arcRadius, y: height * 0.5))
            context?.addArc(center: arcCenter, radius: arcRadius, startAngle: 0, endAngle: endAngle, clockwise: clockState)
        case .UIBezierPath:
            des = clockState ? "UIBezierPath drawing, clockwise" : "UIBezierPath drawing, counterclockwise"
            let bez = UIBezierPath(arcCenter: arcCenter, radius: arcRadius, startAngle: 0, endAngle: endAngle, clockwise: clockState)
            context?.addPath(bez.cgPath)
        }
        NSString(string: des).draw(in: CGRect(x: 2, y: 2, width: width * 0.4, height: height * 0.5), withAttributes: atts)

        log = String(format: "Arc angle: %.4f Pi", endAngle / CGFloat.pi)
        context?.strokePath()
    }
}
Make the imported memory available in functions

Thanks for filing a feature request! Please fill out the TODOs below.

Feature
Hello, I need to access the imported memory inside functions, then I realized the Caller only shows exports, not imports, so I had a lot of trouble getting the Wasmtime store working with MaybeUninit; this simple solution segfaults:

pub struct State {
    pub memory: Memory,
}

let state = MaybeUninit::<State>::uninit();
let mut store = Store::new(engine, state);
let memory_type = MemoryType::new(16, None);
store.data_mut().memory.write(Memory::new(&mut store, memory_type)?);
let store = unsafe { std::mem::transmute::<Store<MaybeUninit<State>>, Store<State>>(store) };
// ...
let imports = [memory.into()];
// segfaults below
let instance = Instance::new(&mut store, &module, &imports)?;

Then I realized the issue is that there's no way for me to transmute only the State; I need to transmute the Store, which is not recommended, since Rust doesn't guarantee the same memory layout:

assert_eq!(size_of::<Option<bool>>(), 1);
assert_eq!(size_of::<Option<MaybeUninit<bool>>>(), 2);

Actually I haven't found any way to use MaybeUninit that doesn't look hacky, and I want to avoid the usage of Option and unwraps in the code, since it bloats the binary with panic data.

Alternatives
- Make the imported memory easily available inside functions, e.g. expose it in the Caller.
- Use #[repr(C)] on Store, so we can safely transmute it.

Where possible I'd recommend avoiding unsafe. If things are segfaulting it's probably due to that, so for example you could store Option<Memory> instead of using MaybeUninit and then there's no need for transmute and this probably won't segfault. Otherwise though, is there a problem with storing the memory in State?

Otherwise though, is there a problem with storing the memory in State?
There's no problem; it's just that there are no examples of that, and it's not as ergonomic as using an exported memory. Rust encourages the use of the typestate pattern, where the state of an object guarantees it is valid. In this case I want to guarantee the memory ALWAYS exists; that's why I don't want to use Option. One example is NonZeroU32: by knowing the number is never zero, the Rust compiler can do some neat optimizations: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=6778305b1980ec60413183a0a1127a4d

assert_eq!(size_of::<u32>(), 4);
assert_eq!(size_of::<Option<u32>>(), 8);
assert_eq!(size_of::<Option<NonZeroU32>>(), 4);

In my case I want to guarantee the memory always exists. I don't want to handle Option::None everywhere, nor use unwrap() everywhere. That's OK if there's no other option, but I think this is something that should be supported somehow by Wasmtime; since the Store owns the memory, it makes sense that I should be able to keep it together with the store.

One thing you could perhaps do is to create a dummy Memory with a throwaway Store which is placed within future Stores as they're created. That would then be overwritten with the "real" memory once the store is created. That way you can store just Memory without having to deal with Option, and you won't have to deal with any unsafety either.
The OWASP Zed Attack Proxy is an open source way of testing your web applications manually. This course walks through the basic functions of ZAP, giving you a look at ways this tool makes taking advantage of web application vulnerabilities possible. This is a starter course for those jumping into the world of web application security. ZAP is the product of an open source OWASP community project and is used by everyone from those starting out in security, to developers, QA testers, and professional penetration testers alike. In this course, Getting Started with OWASP Zed Attack Proxy (ZAP) for Web Application Penetration Testing, you'll learn the process to run your application through a series of tests. First, you'll start by learning the interface and understanding how ZAP works with the browser. Next, you'll discover how to prepare your environment as you set up for the attack. Then, you'll be walked through some of the manual and automated functions of the tool, and how new features of ZAP allow you to pull that functionality into the browser. Finally, you'll explore how to report on what you found. By the end of this course, you'll have the knowledge and the confidence to step through an application and find some opportunities to strengthen the security posture of the software.

Mike is an information security manager who has worked in the IT field for 17+ years. The focus of Mike's career now centers on pentesting and risk assessments. An active member of the infosec community, he attends and speaks at various conferences.

Course Overview
Hello, everyone. My name is Mike Woolard, and welcome to my course on Getting Started with OWASP Zed Attack Proxy for Web Application Penetration Testing. The Open Web Application Security Project, or OWASP for short, is a free and open community dedicated to securing software. One of its flagship projects is ZAP, the Zed Attack Proxy.
ZAP is a tool that can be used by security professionals, developers, and quality assurance teams to test for vulnerabilities in applications under development. Inserting scans with ZAP into the SDLC process is the first step towards a stronger and more resilient product. In this course, we're going to cover the interface and how ZAP works with the browser. We're going to learn how to prepare your environment as you set up for the attack. We will walk through some of the manual and automated functions of the tool, and finally, how to report on what you found. This course will give you the knowledge to feel confident. You'll be able to step through an application and find some opportunities to strengthen the security posture of the software. During this course, we're going to talk about some of the more common vulnerabilities found in web applications. A review of the OWASP Top 10 may be beneficial to help you understand why this tool is able to help you discover the various vulnerabilities. I hope you'll join me on this journey as we start the process of learning how to run your application through a series of tests with the Getting Started with OWASP Zed Attack Proxy for Web Application Penetration Testing course, here at Pluralsight.
OPCFW_CODE
Changes in 2.05 (over 2.01)
- The network window has had a 'names' button added to it, in which you may configure player names. Use -pname <myname> for newer games.
- Sound setup window now has a 'setup music' option which when enabled plays MIDI music whilst setting up the game.
- 'No scaled status bar' and 'Open text window' added to Misc window. These respectively give an unscaled (ie smaller) status bar, and open the new text display window on startup.
- 'No startup warnings' added to Misc window, preventing warnings being displayed and requiring confirmation during startup.
- -precachedemo added to ensure that all the demo is precached, for
- Deathmatch games should now always receive the keys they need (problem with deciding the keys required on a level)
- 24 bit StrongARM code speed up (over 2fps faster on 320x256x256)
- Multiple sprite rotation problem fixed (reported in osiris.wad). Consequently, rotation problems will now be reported as warnings on startup and may be ignored with -noinitwarnings.
- Desktop games should now be more playable, especially in network games with new key and mouse handling allowing much better control.
- Now displays the 'end of game' (ENDOOM) message when you quit (desktop
- Descriptive configuration files are now created.
- No longer need to use -devparm to save screenshots in full screen mode.
- Now millennium compliant (would have never failed, but was 'wrong')
- Messages window, giving a desktop display of the HUD messages in either a fixed pitch font or the system font; useful when messages are turned off in the game. Use -font for the font, -nofont for the system text. -opentextwin will open the messages window initially. -notextwin will disable the window entirely.
- Add -moreblood to the 'others' line for more violent blood splatters.
- Linear and bi-linear smoothing possible in both 8 and 24 bit for walls
- Use -noscaledbar, or select the option from the Misc window, to stop the status bar scaling in variable resolution modes.
- Smoothing effects are now available from the Options->Display->Smoothing menu. There are different capabilities between 8bpp and 24bpp.
- Use IDRESAMP to change all the effects in one go.
- Use IDCACHE to change column caching. Column caching is the 'StrongARM optimisation' and hence will speed up SA redraw if enabled.
- Cross hair colour configuration added.
- -monsternames can be used to show names over monsters' heads.
- -diminmenus will dim the background when menus are in use
- Iconbar gameplay
- Can change resolutions midgame
- Changed networking error messages:
    NetUpdate: netbuffer->numtics > BACKUPTICS
      Tried to transmit more tics than our buffer can handle
    TryRunTics: lowtic < gametic
      Attempted to run a historic tic
      Attempted to run a futuristic tic
  Which (even if not completely understandable) are less scary.
- BACKUPTICS is now 16, rather than 12; this should speed up network games
- Option to display scores in a number of forms:
    'level frags' / 'total frags'
    'level kills' - 'level suicides' / 'total kills' - 'total suicides'
    'level kills' - 'level suicides' + 'level monster kills' / 'total kills' - 'total suicides' + 'total monster kills'
- Option to display details over player heads, for which -allownaming is required on the controller, for Health, Weapon, and Name
- New 'Net options' menu option, allowing some 'in game' control over network game options.
    Hold - if enabled will ignore settings for exiting levels after a number of kills, or a number of kills ahead.
    Exit level - exits the level immediately
    End at - changes the number of kills at which the level ends
    End ahead - changes the number of kills ahead at which the level ends.
  -allowcontrol must be used on the controller for these options to be
- -pmi allows 'post mortem invincibility'. This means that after dying, you will be invincible for a period. You also cannot shoot for this period.
- Can change name whilst in games.
- Can change level limitations (eg end at, end ahead) or retain level during
Network games between mixed 2.01 and 2.02 or later must not use features which are not present in the lowest of the versions. It is recommended that the lowest DOOM+ version be the controller as this should give the most reliable performance.
- Better support for Heretic and Hexen WADs
- New driver using new BACKUPTICS.
- Added 'List' button to list machines on an Access network.
- Added 'drag and drop' facilities for machines in list window (or machine
- 'Auto' button to auto-configure all machines on an Access network.
- New driver using new BACKUPTICS.
- Minor update detailing change to BACKUPTICS.
OPCFW_CODE
Running a breakdown on a terraform project with --terraform-parse-hcl does not show AWS TGW Attachment cost

Running a breakdown scan with the `--terraform-parse-hcl` flag seems to omit AWS transit gateway attachments.

```
module.example_module.aws_cloudwatch_log_group.this
├─ Data ingested                     Monthly cost depends on usage: $0.63 per GB
├─ Archival Storage                  Monthly cost depends on usage: $0.0324 per GB
└─ Insights queries data scanned     Monthly cost depends on usage: $0.0063 per GB

module.example_module.aws_kms_key.this
├─ Customer master key               1 months  $1.00
├─ Requests                          Monthly cost depends on usage: $0.03 per 10k requests
├─ ECC GenerateDataKeyPair requests  Monthly cost depends on usage: $0.10 per 10k requests
└─ RSA GenerateDataKeyPair requests  Monthly cost depends on usage: $0.10 per 10k requests

OVERALL TOTAL $1.00
```

Alternatively, running the breakdown without the flag (using a local terraform plan) seems to work fine!

```
module.example_module.aws_cloudwatch_log_group.this
├─ Data ingested                     Monthly cost depends on usage: $0.63 per GB
├─ Archival Storage                  Monthly cost depends on usage: $0.0324 per GB
└─ Insights queries data scanned     Monthly cost depends on usage: $0.0063 per GB

module.example_module.aws_kms_key.this
├─ Customer master key               1 months  $1.00
├─ Requests                          Monthly cost depends on usage: $0.03 per 10k requests
├─ ECC GenerateDataKeyPair requests  Monthly cost depends on usage: $0.10 per 10k requests
└─ RSA GenerateDataKeyPair requests  Monthly cost depends on usage: $0.10 per 10k requests

module.example_module.aws_ec2_transit_gateway_vpc_attachment.this[0]
├─ Transit gateway attachment        730 hours  $43.80
└─ Data processed                    Monthly cost depends on usage: $0.02 per GB

OVERALL TOTAL $44.80
```

I could provide an example project, if needed.

Thanks @schroedjan 🙏 If you have an example project that would be super helpful to track down this issue.

Hey @aliscott, currently I am not able to reproduce the problem with a simple example project.
I need to further replicate our submodule structure and will get back to you once I can reproduce the problem!

Thanks @schroedjan, let us know if you're able to find a way to reproduce this.

@schroedjan and @frabar if you get the chance, can you please try to see if you can reproduce this with v0.10? Unfortunately there's not much we can do if we can't reproduce it.

Hey @aliscott, @alikhajeh1, sorry to have kept you waiting. I really am not able to reproduce the behavior using simple (inline) modules. Maybe the error is based in using Terraform Modules saved to the Terraform Cloud Registry?! Using the HCL parsing option I also run into problems downloading remote terraform modules (see #1667). Only after running terraform init locally am I able to run infracost, resulting in missing resources as stated before. Using v0.10.1 doesn't fix the behaviour but introduces new errors in some workspaces, where resources are showing in the cost estimation that are definitely not part of the terraform plan (and are not showing up with v0.9.24). Maybe it would be better to wait for #1667 to be fixed. Do you have an estimate on when it should be closed?

Hey @schroedjan, if possible I'd love to get some more information to try and help resolve this issue: module.example_module.aws_ec2_transit_gateway_vpc_attachment.this[0] seems like it relies on a count attribute. Could you paste the actual line of code that has this in? Also, if this line relies on any other resource/variable values could you indicate what they are, e.g.:

```hcl
resource "aws_ec2_transit_gateway_vpc_attachment" "example" {
  count              = var.enabled ? var.attachment_counts : 0 // I need this line
  subnet_ids         = [aws_subnet.example.id]
  transit_gateway_id = aws_ec2_transit_gateway.example.id
  vpc_id             = aws_vpc.example.id
}

variable "attachment_counts" { // I need this block and
  type    = number
  default = 3
}
```

---

```hcl
module "example_module" {
  enabled           = true // I need this attribute
  attachment_counts = 4    // I need this attribute
}
```

Could you paste the additional resources seen in v0.10.1? Thanks

To answer your question regarding https://github.com/infracost/infracost/issues/1667: this is being actively worked on and we hope for a release next week. Thanks

Hey @schroedjan, any update on Hugo's questions above? It's hard to progress this one unfortunately without more info.

Going to close this issue as we've not heard back and we need more info to debug. Feel free to re-open the issue!
GITHUB_ARCHIVE
When you uncheck the 'Publishable' checkbox in the metadata of a structure group and then try to publish a page within it, you get an immediate 'Success' status in the publishing queue. However, as expected, the page has not been published. Is this a bug, or can anyone provide a valid reason why this might occur?

Try to explain this to the Business User: It shows success because it processed correctly every item that this publishing transaction was resolved to. It is unfortunately not as easy to change this behavior as one would think (without performance impacts, which we all know is not what the Publishing Queue needs right now).
- You send ItemX to the queue.
- Publisher process picks it up
- Resolver determines there's nothing to publish, for whatever reason, and creates a List<ResolvedItem> with 0 items
- Renderer loops through all items in the list, and there are no errors.
- Transaction ends with # of errors = 0

Nevertheless, this is something that we will try to address soon. It would also be incorrect to show it as a failure, because it did not fail, so probably a warning status with a "There were no items to publish" message would be the better way to handle it. If you're curious, this is what the Resolver has to say about it:

18:58:39.0317 <4204> Using resolver [Tridion.ContentManager.Publishing.Resolving.StructureGroupResolver]
18:58:39.3256 <4204> Resolving the structure groups [tcm:5-350-4] took 00:00:00.2853292
18:58:39.3285 <4204> StructureGroup tcm:5-350-4 resolved to 0 items. Resolving took: 00:00:00.3327755
18:58:39.3461 <4204> Updating the publish transaction with the list of processed items
18:58:39.3881 <4204> The number of processed items is zero, don't send an empty transport package

The Publish Status is related to the Publish Transaction and everything in it; if that does not raise any warnings or errors, it ends up as successful. The funny (and sometimes annoying) result of this is that an empty Publish Transaction will always return successful instantly. When publishing a Page in a non-publishable Structure Group, the Publish Transaction will basically be created without any Pages in it, so it is an empty Publish Transaction, which will thus show up as being successful.

I am just adding my 2 cents - I don't think it would be right to disable publish on "unpublishable" Structure Groups, because they can have child Structure Groups which are publishable. As such it is possible that publishing the SG will publish grandchild items even though its children will not be published. I agree that it would be nice if, whenever the publish "successfully did nothing", the status was "Warning" rather than "Success" - this confuses my clients also. Perhaps file this as an ER (although I think I already did, many years ago).

You could create a custom resolver that you use as the last resolver in the chain that checks if the set of resolved items has 0 length... Needless to say it would be much nicer if the product gracefully handled this situation itself, but if your customer is insisting something should be done, there are options...
OPCFW_CODE
/*** ---------------------------------------------------------------------------
/// ConversationNode.cs
///
/// <author>Don Duy Bui</author>
/// <date>April 22nd, 2017</date>
/// ------------------------------------------------------------------------***/

using System;
using System.Collections.Generic;
using UnityEngine;

namespace core.dialog
{
    /// <summary>
    /// Contains a single node of a conversation. Keep things Serializable
    /// to allow use with JSONUtility. Must run Process() to make sure all
    /// data fields are initialized.
    ///
    /// NOTE: Make sure the param mods are before the choices
    /// </summary>
    [Serializable]
    public class ConversationNode
    {
        private const string CHOICE_DELIM = "[[";
        private const char CHOICE_ELEMENT_DELIM = '|';
        private const string CHOICE_END_BRACKET = "]";
        private const string PARAM_DELIM = "<<";

        /// <summary>
        /// Treated as the unique identifier for the conversation node
        /// </summary>
        public string title;

        /// <summary>
        /// The body of text for the dialog
        /// </summary>
        public string body;

        /// <summary>
        /// The RAW string that contains all the tags which are applied
        /// to this ConversationNode. Space Delimited.
        /// </summary>
        public string tags;

        /// <summary>
        /// The id of the conversation tree.
        /// </summary>
        public string treeID;

        /// <summary>
        /// The sprite image of the character talking.
        /// </summary>
        public string image;

        /// <summary>
        /// The actual text we want to display
        /// </summary>
        public string displayBody;

        /// <summary>
        /// The list of parameters or triggers that will get handled at
        /// the displaying of this conversation node.
        /// </summary>
        public List<ConversationParamModifier> paramMods;

        /// <summary>
        /// The list of choices available to the user.
        /// </summary>
        public List<ConversationChoice> choices;

        /// <summary>
        /// The tags split up into a string array.
        /// </summary>
        public string[] tagArray;

        public string spriteSheet;
        public string spriteName;

        public ConversationNode()
        {
        }

        public void Process()
        {
            ProcessTags();
            ProcessImagePath();
            ProcessChoices();
            ProcessParams();
        }

        private void ProcessImagePath()
        {
            if (!image.Contains("/"))
            {
                Debug.LogError("Node didn't have image path: " + title);
                return;
            }
            string[] pieces = image.Split('/');
            spriteSheet = pieces[0];
            spriteName = pieces[1];
        }

        private void ProcessTags()
        {
            tagArray = tags.Split(' ');
        }

        private void ProcessChoices()
        {
            string[] choiceSplit = body.Split(CHOICE_DELIM.ToCharArray());
            displayBody = choiceSplit[0];
            if (choiceSplit.Length < 2)
            {
                return;
            }

            choices = new List<ConversationChoice>();

            // Iterate through the choice elements and attempt to parse them.
            for (int i = 1, count = choiceSplit.Length; i < count; i++)
            {
                string rawChoice = choiceSplit[i];
                if (string.IsNullOrEmpty(rawChoice))
                {
                    continue;
                }

                // Remove the end brackets
                rawChoice = rawChoice.Replace(CHOICE_END_BRACKET, string.Empty);
                string[] choicePieces = rawChoice.Split(CHOICE_ELEMENT_DELIM);

                // In case the structure looks odd let's log it out and skip
                if (choicePieces.Length < 2)
                {
                    Debug.LogError("Could not parse choice for: " + title + " : " + rawChoice);
                    continue;
                }
                choices.Add(new ConversationChoice(choicePieces[0], choicePieces[1]));
            }
        }

        private void ProcessParams()
        {
            string[] paramSplit = displayBody.Split(PARAM_DELIM.ToCharArray());
            displayBody = paramSplit[0];
            if (paramSplit.Length < 2)
            {
                return;
            }

            paramMods = new List<ConversationParamModifier>();
            for (int i = 1, count = paramSplit.Length; i < count; i++)
            {
                string rawPMod = paramSplit[i];
                if (string.IsNullOrEmpty(rawPMod))
                {
                    continue;
                }
                ConversationParamModifier pMod = new ConversationParamModifier(rawPMod);
                paramMods.Add(pMod);
            }
        }
    }
}
STACK_EDU
I’ve invited a guest user (simulating a client) to a newly created space. I’ve also carefully shared a Project entity and, by extension, related Tasks entities using Access templates. Everything works well. I can create a Board view and the user can view/comment on the Tasks. But… the guest user can Share any entity with any existing user. I don’t want to expose all users within the Fibery Workspace (our company users and eventual other clients), and I don’t want the user to be able to share entities with existing users. I just tried inviting an existing user and they received an email. Now what would happen if a client invites another client? Perhaps a guest user should not be able to share entities with anyone at all? It feels like a more serious bug to me (viewing/inviting other already existing users). I didn’t anticipate this behavior and maybe other users have not thought about this either (but have started sharing with external users). We have discussed sharing spaces/entities with clients internally, and today we did a small test simulating a guest user. So we were just lucky to find this. I don’t see the use case for a user that only has view/comment access being able to share the entity with anyone at all, so perhaps just removing the Share button would be a simple solution? Unless there exists a Public link already; perhaps then it’s fine. But not really necessary for us at least.

To be clear, a user can currently share with other ‘internal’ users (those who are listed in the People space). The ability to share externally is controlled by an option in access templates. So a user cannot share with ‘anyone at all’, only with other people already in the Fibery workspace, and only with external users if they have that capability. When it becomes possible to manage access to Users, it will be possible to control who sees other users in the workspace.
In this situation, you could have a user who cannot see all of the other users in the workspace, so can’t share with them. I think it is reasonable to allow users to share things they know with people they know (in the workspace). Even if it didn’t behave like that, there would be nothing to stop someone with view access from taking screenshots or copy-pasting content, and I don’t know many tools that effectively prevent that 100%. Your summary is correct, as things currently are. When we roll out the ability to control access to other user entities, then number 1 will be solved, which means that number 2 becomes irrelevant - users won’t be able to share with users they can’t see. FWIW, you should be aware that it may not be possible to create a Fibery workspace which achieves GDPR compliance, depending on your policies and the data you store, based on how Fibery currently works. For example, as it stands at the moment, deleting an entity in a workspace does not delete the history of that entity from the activity log. So it is not possible to completely eradicate all traces of an entity. This could make it very difficult for you to achieve GDPR compliance in relation to your obligation under article 17. Nope. Activity log keeps a record of its existence. Have a look at your activity log It’s true that records in the activity log will make use of the entity’s new name after it’s renamed, but it is still possible to see what the old name was in the log entry for when the name change happened:
OPCFW_CODE
Well, the Blogger platform is really amazing, easy, and free, just like Netlify. The only difference with a static site is that you should have coding skills, but you don't need to panic if you don't: you can use free themes and use your own admin panel to create posts, just like you do on WordPress and Blogger... and thanks to Forestry.io, these guys make it even simpler than ever.

So why move from WordPress or Blogger? Well, the big win here is to stay healthy and get some sleep without any worries about someone trying to hack my Google account, hijacking my WordPress database, or site downtime. These things really suck when you're a blogger who pays programmers just to click a button to update your site (WP only). As for Blogger, mmmh, I don't trust Google anymore, because one day they might even say "we are shutting down Blogger, please export all your files", lol. By now you know that a lot of features and a lot of Google apps have been shut down (no more charity work). So let's get back on track; here are a few simple reasons why I'm moving from Blogger to static site generators.

You Value Security

If you use Drupal or WordPress, you know what I'm talking about when it comes to security updates. The better you do online, the less John sleeps too. John will always try to hack your website to bring you down, steal your emails, or even monetize your website. But with a static site, don't worry: nobody in history will ever hack your site, unless they hack your GitLab or GitHub repo and delete it or merge malicious code.

Yeah! Ever wondered why some sites load slowly, and when you refresh too often they redirect you to a server error? Yeah!! This is due to running thousands of database queries that will eventually throw an error. With a static site that's not possible: with a click you have all the files in your browser with just a single request... to test that, you can invite millions of bots to visit this site and see how crazy fast it loads.
For this part it's really easy: you will have all you need. You don't need plugins that slow your site, or to worry about your SEO, setting your metadata and titles, and paying John to do your SEO... Out of the box, the Hugo framework comes with a sitemap generator that will always feed search engines what they need, on time. And when it comes to the layout, you have everything you need as a blogger; but if you're looking to sell stuff, this is not for you - try WP.

If you don't want to host your own, then you just have to hope that your host keeps its PHP and MySQL up to date, so that you aren't exposed to those pesky security vulnerabilities that crop up every now and again. Then there is the upkeep. Make sure you've allotted time to manage all these dependencies, and some more time in case an updated plugin or theme breaks something.

A static site, when generated, is capable of being hosted on any web server that can return HTML files (which gives you a whole bunch of options). Of course, you'll want to take advantage of the possibilities afforded to you with a static site by finding a host that allows for things like continuous deployment, instant cache invalidation, automated deploys and more. But you can leave that to somebody else, and instead of installing, managing and updating your CMS, you can focus on developing your site and being a pro spammer, lol.

Have you ever updated your WordPress site and broken the entire site? Or accidentally deleted a database table because a plugin was throwing errors? Or ever been in the situation of missing or losing your database? Well, with a static site, forget about the database.

As I'm writing this post I'm learning how to code my own theme, which I will use soon. I feel like blogging from the console will improve my coding skills, and I will learn a lot of command line, which will make me a good programmer. Do you know how happy it feels when I push my commit on GitLab?
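To give you an idea of how little setup the Hugo sitemap generator mentioned above needs, here is a minimal site configuration sketch (the file name, URL, and values are just illustrative):

```toml
# config.toml — a minimal Hugo site configuration (illustrative values)
baseURL = "https://example.com/"
title   = "My Blog"

# Hugo emits /sitemap.xml out of the box; these settings just tune it
[sitemap]
  changefreq = "weekly"
  priority   = 0.5
  filename   = "sitemap.xml"
```

No plugin to install or update - the sitemap is regenerated on every build and always matches the published content.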
This will also give you an idea of build tools that output your HTML to a particular directory on your build machine. Nearly all tools include a local web server, which allows you to check and double-check your progress as you go. You have the security of knowing that your site will look exactly the same to your visitors as it does to you as a developer.

Well, Blogger is free, but you have to share your revenue with Blogger, like 5% (I don't know much about this), and with other CMSes you have to pay for hosting... and your expenses will differ depending on your site's growth; some bloggers spend more than 1k per month on plugins, hosting and extensions. With a static site it's free; there are some features you will pay for, but not a lot like you would with hosting companies.
OPCFW_CODE
How does a kernel mount the root partition?

My question is with regards to booting a Linux system from a separate /boot partition. If most configuration files are located on a separate / partition, how does the kernel correctly mount it at boot time? Any elaboration on this would be great. I feel as though I am missing something basic. I am mostly concerned with the process and order of operations. Thanks!

EDIT: I think what I needed to ask was more along the lines of the dev file that is used in the root kernel parameter. For instance, say I give my root param as root=/dev/sda2. How does the kernel have a mapping of the /dev/sda2 file?

Though people below cover initrd, there is little discussion of why initrd is used. My impression is that it is because distributions like Debian want to use one kernel on many different machines of the same architecture, but possibly widely different hardware. This is made possible by modularizing hardware support via kernel modules. The initrd does not require much hardware support to boot, and once it does, it loads the necessary hardware modules to proceed. Elaborations/corrections to this are appreciated.

You can't mount /boot without having / mounted first, since there is no /boot directory without /. Linux initially boots with a ramdisk (called an initrd, for "INITial RamDisk") as /. This disk has just enough on it to be able to find the real root partition (including any driver and filesystem modules required). It mounts the root partition onto a temporary mount point on the initrd, then invokes pivot_root(8) to swap the root and temporary mount points, leaving the initrd in a position to be umounted and the actual root filesystem on /.

What if you don't have an initrd, like LFS (linuxfromscratch.org)? @Mr.
Shickadance: Not having looked into how LFS does things, I would guess that they make sure the kernel has all necessary modules compiled into it (or loaded via GRUB 2, which possibility is new enough that not many distributions have noticed it yet) so it can start on the actual root partition. @mr. Shickadance. It's not just LFS that doesn't have an initrd. Anyone who compiles their own kernel has the option of not using an initrd, which is what I do on Gentoo. Will the possibility of loading grub2 modules make initrd obsolete? I believe grub2 modules can be used to load hardware support, but I'm not really familiar with either initrd or grub2. See my comment to the original question above. @Faheem: grub2 modules are not the same as kernel modules. I see some ability for grub2 to load kernel modules, but one thing I don't know is whether that will work for the Linux kernel or only for *BSD (where the bootloader loading kernel modules is normal). I suspect the kernel needs to be taught where to find the address map for loaded modules, and everyone needs to move to grub2 (grub1 is still standard in some distributions). The initrd has been replaced by initramfs, since pivot_root was regarded as a dirty hack. That's incorrect, since initrd is optional. Linux can directly mount file systems detected during boot, as long as the necessary drivers are built into the kernel and not configured as kernel modules. pivot_root is unnecessary for booting. In ancient times, the kernel was hard coded to know the device major/minor number of the root fs and mounted that device after initializing all device drivers, which were built into the kernel. The rdev utility could be used to modify the root device number in the kernel image without having to recompile it. Eventually boot loaders came along and could pass a command line to the kernel. If the root= argument was passed, that told the kernel where the root fs was instead of the built in value. 
The drivers needed to access that still had to be built into the kernel. While the argument looks like a normal device node in the /dev directory, there obviously is no /dev directory before the root fs is mounted, so the kernel can not look up a dev node there. Instead, certain well known device names are hard coded into the kernel so the string can be translated to the device number. Because of this, the kernel can recognize things like /dev/sda1, but not more exotic things like /dev/mapper/vg0-root or a volume UUID. Later, the initrd came into the picture. Along with the kernel, the boot loader would load the initrd image, which was some kind of compressed filesystem image (gzipped ext2 image, gzipped romfs image, squashfs finally became dominant). The kernel would decompress this image into a ramdisk and mount the ramdisk as the root fs. This image contained some additional drivers and boot scripts instead of a real init. These boot scripts performed various tasks to recognize hardware, activate things like raid arrays and LVM, detect UUIDs, and parse the kernel command line to find the real root, which could now be specified by UUID, volume label and other advanced things. It then mounted the real root fs in /initrd, then executed the pivot_root system call to have the kernel swap / and /initrd, then exec /sbin/init on the real root, which would then unmount /initrd and free the ramdisk. Finally, today we have the initramfs. This is similar to the initrd, but instead of being a compressed filesystem image that is loaded into a ramdisk, it is a compressed cpio archive. A tmpfs is mounted as the root, and the archive is extracted there. Instead of using pivot_root, which was regarded as a dirty hack, the initramfs boot scripts mount the real root in /root, delete all files in the tmpfs root, then chroot into /root, and exec /sbin/init. After the chroot, is the tmpfs automatically unmounted? Does it just disappear? 
@jiggunjer, no, it is still there, it is just empty ( aside from containing the /root directory ), and no longer used. I learned something new about every iteration of root fs that you mentioned. Great answer! Sounds like you're asking how does the kernel "know" which partition is the root partition, without access to configuration files on /etc. The kernel can accept command line arguments like any other program. GRUB, or most other bootloaders can accept command line arguments as user input, or store them and make various combinations of command line arguments available via a menu. The bootloader passes the command line arguments to the kernel when it loads it (I don't know the name or mechanics of this convention but it's probably similar to how an application receives command line arguments from a calling process in a running kernel). One of those command line options is root, where you can specify the root filesystem, i.e. root=/dev/sda1. If the kernel uses an initrd, the bootloader is responsible for telling the kernel where it is, or putting the initrd in a standard memory location (I think) - that's at least the way it works on my Guruplug. It's entirely possible to not specify one and then have your kernel panic immediately after starting complaining that it can't find a root filesystem. There might be other ways of passing this option to the kernel. This is the right explanation when there's no initrd/initramfs, but it's missing a piece of the puzzle. Normally the kernel identifies a device such as /dev/sda1 because it's an entry in a filesystem. You could do cp -p /dev/sda1 /tmp/foo and /tmp/foo would represent the same device. On the kernel command line, the kernel uses a built-in parser that follows the usual device naming convention: sda1 means the first partition of the first SCSI-like disk. @Gilles so modern kernels still can't handle mounting a volume based on UUID? without initrd or initramfs I mean. 
It has to be a "simple" partition in the /dev/sdX form?

@jiggunjer Modern kernels do support searching for a volume by UUID. See init/do_mounts.c.

Grub mounts the /boot partition and then executes the kernel. In Grub's configuration, it tells the kernel what to use as the root device. For example, in Grub's menu.lst: kernel /boot/linux root=/dev/sda2

C'mon, GRUB doesn't "mount" /boot, it just reads menu.lst and some modules; it isn't part of the Linux kernel either. When you call the kernel, you pass a "root" argument with the root partition. At worst, the kernel knows that just /boot has been mounted (LOL). Next: geekosaur is right, Linux uses an initial ramdisk in compressed image format, and then mounts the real root filesystem by calling pivot_root. So Linux starts running from an image, and then from your local disk drive.

Grub most definitely does have the ability to "mount" a filesystem, especially in grub2. Of course, all it's capable of doing with it is looking for bootable kernels of one stripe or another, but that's still mounting. Also, Linux doesn't require an initrd unless your kernel compiled drivers crucial to your hard drive as modules.

http://www.ibm.com/developerworks/linux/library/l-linuxboot/ This is a fairly concise summary of what the Linux kernel does when booting.

@Shadur, from the mount manpage: "All files accessible in a Unix system are arranged in one big tree, the file hierarchy, rooted at /. These files can be spread out over several devices. The mount command serves to attach the file system found on some device to the big file tree." Since filesystems used by GRUB aren't attached to the file hierarchy, it's NOT mounting.

@Shadur, BTW: it's obvious that initrd isn't necessary, since it's just another root filesystem, but it's generally used as a small boot-time root: the kernel loads what's necessary to boot, then boots, and finally loads everything else.
@d4rio They're mounted by GRUB, not Linux -- it gets easier to comprehend when you regard GRUB as a microkernel OS of its own rather than just a bootloader.

@Shadur, the main difference is that GRUB isn't a microkernel OS, it's just a bootloader. The process of mounting in Linux is part of the kernel; you can find it in sys_mount() in fs/namespace.c and fs/super.c. They're used by GRUB but then mounted by the kernel, since the kernel is NOT started when GRUB reads menu.lst, and it needs to boot before filling its mounted-partition tables, registered filesystem types, etc., etc.

@jsbillings: Nice link!

The boot loader, be it GRUB or LILO or whatever, tells the kernel where to look with the root= flag, and optionally loads an initial ramdisk into memory via initrd before booting the kernel. The kernel then loads, tests its hardware and device drivers, and looks around the system for what it can see (you can review this diagnostic info by typing dmesg; nowadays it likely scrolls by way too fast to see), then attempts to mount the partition mentioned in the root= parameter. If an initrd is present, it's mounted first and any modules/device drivers on it are loaded and probed before the root filesystem is mounted. This way you can compile the drivers for your hard drives as modules and still be able to boot.
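The idea above (the kernel's built-in parser pulling root= off the command line and translating a well-known name like /dev/sda1 into a device number) can be sketched roughly. This is a toy illustration in Python, not the real kernel code, which lives in C in init/do_mounts.c:

```python
# Toy sketch of the kernel-style behaviour described above: pull "root="
# out of the boot command line and translate a "simple" well-known name
# like /dev/sda1 into a (major, minor) pair. Illustrative only.
import re

def name_to_devt(name: str):
    # SCSI-like disks use major 8; sda is minor 0, sda1 minor 1, sdb minor 16.
    m = re.fullmatch(r"/dev/sd([a-z])(\d+)", name)
    if m:
        disk = ord(m.group(1)) - ord("a")   # sda -> 0, sdb -> 1, ...
        part = int(m.group(2))              # partition number
        return (8, disk * 16 + part)
    return None  # exotic names (/dev/mapper/..., UUID=...) need an initrd/initramfs

def parse_root(cmdline: str):
    for arg in cmdline.split():
        if arg.startswith("root="):
            return arg[len("root="):]
    return None  # no root= given: the kernel would panic

root = parse_root("ro quiet root=/dev/sda1 init=/sbin/init")
print(root, name_to_devt(root))  # /dev/sda1 (8, 1)
```

This is also why a name like /dev/mapper/vg0-root fails without an initramfs: there is no table entry for it, only the initramfs userspace can resolve it.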
I made a short demo with three notes. Tap the screen to add a red dot in a row. When Play is tapped, the transparent black line moves across the screen, and a note is played when its x position is equal to a red dot's x position; the dot's y position determines what tone is played. I'm too busy working on other projects, so anyone is welcome to add to and finish this project by adding the other notes. @Rawrbear This might interest you.

Try this new link. I made a short demo with three notes.

I like it! Good job! This project is going to be awesome!

Cool! But it lags very badly, and then when it plays the note, it sounds the note over and over again with static... But it is a great concept :D

There's no bar now but the notes are played. The problem is that when the screen's tapped to make a clone, sometimes 2, 3 or 4 clones are made at the tapped area instead of just one. I faced this problem a while ago but forgot the solution. Can either of you remix this and fix it so only 1 clone is made when the screen's tapped? This seems to be the only problem. There's no lag so far with the change in code I did to remove the bar. Each clone's speed is set to its X location when made. I made a SOUND value that starts at 0 when PLAY is tapped and increases in value till it reaches 1024. When the SOUND value equals a cloned dot's SPEED value, the clone's y position is checked and a note is played.

@oio can you fix the problem so only 1 clone is made when the iPad's tapped? I tried wait 200 ms before making the clone and wait 200 ms after making the clone, and it hasn't fixed the problem of multiple clones stacking when the iPad's tapped.

When there are 2 sounds at the same time, it lags with static and freezes my iPad! That's probably because 2 or more clones are made when the screen's tapped instead of just 1. The first link works better, I think.

Sorry if it's too late, I'm just now seeing this.
Prior to looking at the code, a suggestion that I would make is that you only create a clone of the original, base-level object in the "when _ is tapped" (or whatever) rule. One way that I do that, routinely, is to use an "if" statement in order to identify a specific attribute of the base object. For example, I will often set the invisibility of the base object to 100%. Then, on a tap event or some other event, I will put the "create a clone" block inside of an "if invisibility% = 100". I believe the same approach can be used with "speed" or even position to uniquely identify the one object that you wish to clone. Hopefully, that made some sense.

Thanks. I totally forgot about the clones making clones of themselves, because I use Tynker a lot. I did as you said, having the original invisible and checking for 100% invisibility before a clone is made. This works perfectly now. Now I have to add the other tones. I'll post a link when it's ready.

I am glad to hear that the problem was relatively straightforward to fix and that the suggestion helped.

A pic of current progress. All that's left to do is choose a sound effect for the second sound beat, and maybe some thin vertical lines every 25 pixels for a visual aid. The first sound beat is a CLAP.

This has got to hit Game Changers!

Thanks for the encouragement, but I didn't find this difficult to do except when I forgot about clones cloning themselves. I've had a small draft sitting idle for a long while now and decided to use it for this project.

Yep, I know. I checked it out about one minute after you published it. That's so awesome! It should get Game Changers or featured!

I'm concerned something's gone wrong. I don't have my iPad at the moment. Can you test it out to see if only the amount of taps you make are played? For example, tap the screen 13 times to place 13 dots; can you play and listen whether more than 13 sounds play? Thanking you if you can. Please report back here if something's gone wrong.

I'll do it asap!
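For anyone curious, the sequencer mechanic described in this thread (a SOUND value sweeping from 0 to 1024, a note firing when the sweep reaches a dot's x position, the dot's y position picking the tone) can be sketched outside the visual editor. This Python sketch is illustrative only; the function and tone names are made up, not from the actual project:

```python
# Illustrative sketch of the sequencer mechanic from the thread: dots are
# placed by taps, a SOUND sweep value runs from 0 to 1024, and a dot's
# note fires when the sweep reaches its x position. Names are invented.
def tone_for_y(y, screen_height=768, tones=("C", "D", "E")):
    # The dot's y position picks which of the three demo tones is played.
    band = screen_height // len(tones)
    return tones[min(y // band, len(tones) - 1)]

def play(dots, sweep_max=1024):
    played = []
    for sound in range(sweep_max + 1):   # the moving SOUND sweep value
        for (x, y) in dots:
            if sound == x:               # sweep reached this dot's x position
                played.append((x, tone_for_y(y)))
    return played

dots = [(100, 50), (300, 400), (700, 700)]  # (x, y) of each tapped dot
print(play(dots))  # [(100, 'C'), (300, 'D'), (700, 'E')]
```

Note how the duplicate-clone bug discussed above would show up here too: if a tap added the same dot twice, its note would be appended twice at the same sweep value, which matches the doubled, static-y playback people reported.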
(Winky face challenge)
10-22-12, 01:44 PM There are several threads I've noticed in the RWD forum that I can't find anymore. Nothing comes up in a search of keywords, and when I look through my list of previous comments, tons of them are missing. It's like tons of threads with valuable information have vanished. Did I miss something?? Right in the middle of replacing a heater core, I went looking through here for an old thread that I was going to use for reference, and I can't find it to save my life. What's going on?

10-22-12, 02:08 PM To the best of my knowledge no threads have been removed from RWD. Open your profile and search your recent posts list. Try to google something like "cadillac forums rwd xxx xxxx xxxx xxxx" where xxxx is a bunch of keywords after the car model.

10-23-12, 09:54 AM I would recommend using Google search to find something here anyway...

10-23-12, 10:18 AM :yeah: The search feature here does not seem to work very well. Just add "Cadillac Forums" to your search in Google.

10-25-12, 09:04 PM Ben, no threads are gone. The search just isn't great.

10-28-12, 12:14 AM If you use Yahoo (this may work for other search engines), use "site:www.cadillacforums.com" and add your search terms. It will only search Cadillac Forums. site:www.cadillacforums.com RWD heater core

11-05-12, 12:59 PM Thanks for the tips. I was able to find the thread for the heater core I was looking for, and another old one I'd missed. However, it's still quite an inconvenience that I can't even get a complete list of threads I've started myself, even looking in the user CP and searching for "threads started by Benzilla". That's the biggest problem. To the best of my knowledge, no external search engine can return results by original poster. Therefore, there are still things that I've posted that I can't see and that cannot be searched for. Not to whine, but this is a problem. A *NEW* problem at that. If anyone knows some workarounds not listed that I don't know, please advise.
11-05-12, 02:15 PM Ben, I was able to pull up 48 threads back to Oct. '08. Is that something you can't get, or did you need further back?

11-05-12, 05:24 PM I also get 48, going back to my first post. But there are random threads started between then and now that are nowhere to be found without doing a custom external search of keywords. This includes a couple of pretty lengthy project threads. It's really weird.

11-05-12, 06:08 PM All us old guys are coming back.

11-05-12, 11:10 PM Oh. I was hoping it was just a matter of having someone else pull them up if you couldn't. Oh well.
Yesterday, which was a Sunday, was a rather eventful day. It is a day in history for apparent reasons, though I am here to document and summarize what happened in my small startup life. I launched my first product on Product Hunt! There is a lot of learning that I can share with you from the launch.

Product Hunt operates on a 24-hour interval. The clock resets at 00:00 Pacific time (3 am in New York, where I live). All products launched during those 24 hours show up under the Newest tab. The highest-voted products show up in the Popular tab, which is the default tab that brings massive traffic. A noteworthy rule is that every product shares the same 24-hour interval. If your product is launched at 23:59 Pacific time, it only has 1 minute of exposure.

My original idea was to launch in the morning. I assumed the people I can reach are in a similar timezone to mine, so I should promote during my daytime, when my audience is active on Twitter or checking their emails. Launching at 3 am felt very counter-intuitive to me. I honestly forgot that 3 am in New York would be 9 am somewhere else in the world. I should have had a global mindset when I planned the launch. The cryptocurrency market never closes, and neither does launching a product.

While Product Hunt can bring traffic, it doesn't convert to paying customers. Its visitors are not in a shopping mindset; they are in a discovery or research mindset. While I had sales through Product Hunt, they all came from direct audiences of my cofounder's or mine. It didn't help that my product was one of only two that required payment that day. The majority of launched products are free or offer a free tier. It is better to offer a free option and use Product Hunt as a channel to get exposure rather than paying customers.

Our product got 34 upvotes in total but didn't get onto the Popular tab, while products with fewer votes did. I think that is due to the ranking algorithm.
Votes from accounts with a long history are weighted more, while votes from freshly opened accounts are weighted less or even penalized. If you are planning a Product Hunt launch, I suggest reaching out to a few Product Hunt users with a long history and securing their votes upfront to get your product onto the Popular tab.

I think this was a good learning experience. I was very fearful of launching on Product Hunt before because I was overthinking it. This experience debunked many misperceptions about launching on Product Hunt and gave me some ideas for how to run a successful launch. I need to learn by practicing, be it making sales or launching a product on an unfamiliar platform.

Thanks for tuning in! If you could share my newsletters with friends who may enjoy the content, I would really appreciate it!
This is the first post on automated integration testing in this series, but it will not be the last. In this first post we'll look at how to use the Microsoft.AspNetCore.TestHost package to run integration tests of our API. In future posts I want to take a look at SpecFlow (because I love it) and also at scripting a short-lived test environment with its own short-lived database which is torn down after the test run.

In the last post we stored the database user password securely as a Docker Secret. In this post we'll look at Docker Configs, which use the same model as secrets except that the files are mounted into the container file system unencrypted. Why use Docker Configs instead of the other alternatives? Let's quickly compare Docker Configs with other options.

Working with secrets correctly in today's security environment is crucial. Storing database passwords and API keys in plain-text files or in the source control system is just plain reckless. There are a few good options out there, and today we are going to look at Docker Secrets. What I really like about Docker Secrets is that it is so simple to use. Complexity is the enemy of security, and with Docker Secrets being so simple it is hard to mess it up.

So far we have a relatively simple ASP.NET Core 2.0 Web API that runs directly on the Windows operating system. But the next few posts in the series are going to need Docker. We'll be using Linux containers and looking at configuration, secrets management, and adding a second API. We'll be looking at using Docker Secrets, Docker configuration files, using HashiCorp Vault, and creating our first Swarm with two services. So in this post we'll look at the built-in Docker support in Visual Studio 2017, and at the various files that get added and what they do.

In the last post we added a short description of the API using markdown. The Description property of the Info class gets rendered at the top of the page above the list of actions.
In this post we are going to take that a bit further. One problem with putting documentation in your Swagger docs is that if you put a lot of detail there, then you are forced to scroll down every time you view the Swagger UI of your API. This can get pretty annoying quickly. So we'll look at a trick for adding as much detail as you want while avoiding the scrolling issue.

Swagger acts as both machine- and human-readable documentation for your APIs, but via the Swagger UI it also offers you a way of interacting with your APIs easily. We are going to embed a Swagger UI in our APIs that will load when you press F5, making it hassle-free to test your API during development and testing. We are going to add Swashbuckle.AspNetCore to our ASP.NET Core 2.0 API to create this embedded Swagger UI.

- API Series Part 1 - Let's Build Something
- API Series Part 2 - Documentation - Swagger
- API Series Part 2b - Add Non-Intrusive Markdown to Swagger UI
- API Series Part 3 - Adding VS2017 Docker Support
- API Series Part 4 - Secrets Management with Docker Secrets
- API Series Part 5 - Configuration with Docker Configs
- API Series Part 6 - ASP.NET Core 2.0 and Integration Testing

In this series we'll build a set of APIs with ASP.NET Core 2.0 in a cloud-native, microservices architecture that can satisfy a long list of non-functional requirements. So what does cloud native actually mean? It means that you don't know where your applications are going to run anymore. Your applications need to be able to work out how they are configured, integrate themselves into the wider system of services, and start working. Why is that a desirable thing? Because it means that we can stand them up and down at will, without need for special system configurations. That means we can create copies of them for scaling out, and we gain resiliency because when one copy of an application dies we just start up another one.
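The Docker Secrets and Docker Configs model described above is language-agnostic: both arrive in the container as plain files (secrets under /run/secrets). The series itself uses ASP.NET Core, but the read-a-mounted-file pattern can be sketched in a few lines of Python; the secret name "db_password" and the env-var fallback are illustrative assumptions, not part of the series:

```python
# Sketch of the Docker Secrets pattern described above: the secret arrives
# as a plain file mounted at /run/secrets/<name>, so the app simply reads
# it at startup. "db_password" is a hypothetical secret name, and the
# env-var fallback is a common local-dev convenience, not a Docker feature.
import os

def read_secret(name, secrets_dir="/run/secrets", fallback_env=None):
    path = os.path.join(secrets_dir, name)
    try:
        with open(path) as f:
            return f.read().strip()
    except FileNotFoundError:
        # Outside a swarm (e.g. local dev) fall back to an env var, if given.
        if fallback_env:
            return os.environ.get(fallback_env)
        return None

password = read_secret("db_password", fallback_env="DB_PASSWORD")
```

Docker Configs follow the same shape, just with an unencrypted file mounted at a path you choose, which is why the post calls them "the same model as the secrets".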
Besides building and supporting the UK Earth System Model and providing science-based guidance on future Earth system change, the activities of the UKESM project include outreach and engagement. In this vein, our group has presented at events such as the Blue Dot Music and Science Festival and the Royal Society Summer Exhibition, describing how our work contributes to studies of climate change and its implications for everyday life. The Met Office (which is one of the partners in the UKESM project) also has a strong outreach programme, which has a particular emphasis on educating young people about the wide-ranging impacts of weather and climate change. The programme provides resources for lessons in schools and, through its STEM (Science, Technology, Engineering and Mathematics) Ambassador activity, aims to increase young people’s awareness of Met Office science and encourages interest in STEM careers at the Met Office and further afield. In addition, the Met Office receives requests from external groups such as the u3a, Probus, Women’s Institute, Rotary and science clubs to provide a speaker for one of their meetings. The informality of the setting suggests a talk which, whilst informative, is also entertaining. The Met Office doesn’t charge a speaker’s fee; instead, the group is asked to make a donation to its corporate charity, which is currently Surfers Against Sewage. I joined the speaker’s roster shortly after arriving at the Met Office in 2013 – mainly as an incentive for me to find out more about its activities before telling people about them. Since then, I’ve given several of these talks to groups up and down the country and have found this opportunity to engage with members of the general public about our work to be stimulating and – occasionally – surprising. The talk describes the origins of the Met Office (see Figure 1), how the weather is forecasted and our work in climate science. 
The emotional link that people have with the weather is always a feature of questions and comments from the audience, although I no longer feel I am being held personally responsible for the – increasingly rare – occasions on which the forecast has proved to be inaccurate or misleading. The name of Michael Fish (who famously discounted the possibility of a hurricane in his forecast on the BBC a few hours before the Great Storm of 1987 hit the UK) invariably comes up, although this can be used as part of a discussion of the improvements which have been made in observations and modelling in the ensuing thirty-odd years (Figure 2).

Figure 1. The Met Office was set up under Robert Fitzroy (top left) following the loss of The Royal Charter in a violent storm in 1859 (right). Fitzroy was responsible for the first weather forecasts (bottom left), a term he originated.

Figure 2. The accuracy of the 1-day forecast can be measured as the difference between the value of a forecast quantity (such as mean sea level pressure, shown here) and the observed value of that quantity a day later. A decreasing difference (note the direction of the y axis) corresponds to increased accuracy. As the period of the forecast increases, it becomes less accurate, but all forecasts are improving over time; in fact the accuracy of the 2-day forecast today is about the same as that of the 1-day forecast a decade ago.

Part of those improvements is due to increased compute power. For the first hundred years of the Met Office's existence, the forecast was calculated by hand (as in the example shown in Figure 1). Indeed, the word "computer" originally denoted a person who computes; when electronic devices were developed mid-way through the last century, they were distinguished from the people by the addition of the "electronic" modifier to their names. Those people working on the original forecasts would – like all of us – have taken around a second to perform a single calculation.
This puts into perspective the speed of the current Met Office computer, which can perform twelve thousand million million calculations per second. The resultant improvement in forecasts also helps explain why the Met Office is about to move to a newer, faster machine (which the members of the audience have paid for, however involuntarily).

When I started giving these talks eight years ago, I had the impression that audiences were more interested in the weather than in the climate: thus, for example, I was told by a member of the audience at my very first presentation that global warming was simply a hoax dreamed up by the government in order to raise his taxes. That focus has shifted over the years, which I think reflects the public's increased awareness of climate change and its implications (in addition, it might be that as the reliability of the forecast has improved it has started to be taken more and more for granted). Climate change is a scientific subject in which the public has become more interested (at least compared to other areas of science) because of its connections to human activities and implications for society. The scale of the problem (both spatial and temporal), together with the amount of commitment – at levels ranging from the personal to the national – required to mitigate or adapt to its effects, can be daunting (Figure 3). A sustained degree of public engagement and understanding is one of the requirements for successful mitigation and adaptation efforts. Presenting results recently determined using our model (Figure 4) illustrates some of the aspects of climate science and conveys our concerns about what is likely to happen in the future.

Figure 3. Each year is assigned a colour (on a blue-red scale) corresponding to its average UK temperature. This provides a striking illustration of the way in which the UK has been getting hotter in recent years.

Figure 4.
Climate models (in this case UKESM1) simulate the behaviour of the Earth's climate and calculate variables of interest (in this case near-surface temperature) over the historical period and for different projections into the future. Temperature is plotted as the difference from its global average for 1850-1900, which focusses attention on changes since that period.

Prior to the COVID-19 pandemic, I was giving a few of these talks every year. The last talk I gave in person was to a group in Glasgow at the beginning of 2020, on the day after Storm Ciara (Figure 5) caused significant damage to the UK – in particular, major disruption to rail services. I might say that the talk appeared to go down very well indeed, although I think that was partly because the group couldn't believe someone had made a 16-hour round trip for the sake of an hour's meeting. The lockdown has meant that subsequent talks have had to be delivered remotely (usually via Zoom or Teams). Like so many of the changes which we have been forced to embrace in the past year, there are positive aspects to this one: specifically, accessibility and the time saved through not having to travel (see above). In addition, direct communication between groups has led to recommendations for speakers and, accordingly, the frequency of talks has increased recently, so that I am now giving one every couple of weeks. It remains to be seen how much of this new mode of delivery and engagement will be retained as we emerge from lockdown.

Figure 5. A few of the effects of Storm Ciara in the UK (railway from Exeter to Glasgow not shown).
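The speed comparison made earlier in the piece (a human "computer" at around one calculation per second versus a machine doing twelve thousand million million per second) can be made concrete with a little arithmetic; the figures below come straight from the text:

```python
# Rough arithmetic behind the comparison in the text: a human "computer"
# at ~1 calculation per second versus the Met Office machine's
# twelve thousand million million (1.2e16) calculations per second.
machine_per_sec = 12_000 * 1_000_000 * 1_000_000   # 1.2e16
human_per_sec = 1
seconds_per_year = 60 * 60 * 24 * 365

# How long would one second of machine time take a single human?
years = machine_per_sec / (human_per_sec * seconds_per_year)
print(f"about {years:.0f} years")  # hundreds of millions of years
```

That is, a single person computing non-stop would need on the order of a few hundred million years to match what the machine does each second.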
Mark S. Miller erights at google.com
Tue Jun 9 15:11:26 UTC 2015

On Tue, Jun 9, 2015 at 7:58 AM, C. Scott Ananian <ecmascript at cscott.net> wrote:

> Mark: I outlined two of these use cases in
> One is `WeakPromise`, which is a promise holding a weak reference to its
> resolved value. This is the closest analogy with the canonical Smalltalk
> motivating example for species.
> Another is `TimeoutPromise`, which is a promise that rejects in a fixed
> time if not resolved before then.
> Both of these would set `WeakPromise[Symbol.species] =
> TimeoutPromise[Symbol.species] = Promise;` so that the weak
> reference/timeout is not "contagious" across `then()`.

Ok, thanks for the example. I understand your rationale. This is what I asked for but not what I was really looking for. Can we come up with an example where class and species usefully differ where neither is Promise? IOW, where the species is a proper subclass of Promise and a proper superclass of the class in question.

Btw, Allen's conclusion sounds right to me. But I'd like to see such an example, or see us fail to find one, before considering the what-it-should-be-in-the-absence-of-timing issue settled. Thanks. Given the timing, unless the case is very strong, which you clearly doubt as well, I'm sure we won't actually revisit this for ES6, and thus never.

> `WeakPromise.all([...])` reads like it should return a `WeakPromise` (ie,
> ignore species). This is consistent with the "static methods ignore
> species" rule.
> *However*, `WeakPromise.resolve([....]).all()` makes it clearer that the weak
> reference held by `WeakPromise` is actually just to the initial array, and
> that the result of `WeakPromise.prototype.all` should in fact be a
> `Promise` (ie, honor species).
> Similarly for `TimeoutPromise`: `TimeoutPromise.resolve([...]).all()`
> implies a timeout to resolve the initial array-of-promises, but the result
> of the `all()` is a normal Promise and won't timeout.
> But on the other
> hand, you wouldn't be wrong if you thought that `TimeoutPromise.all([...])`
> should timeout the result of the all.
> So, to me it seems like if you think that `Promise.all(x)` is shorthand
> for a future `this.resolve(x).all()`, then honoring the species is not a
> terrible thing. In ES7-ish, when you allow `Promise.all` and
> `Promise.race` to accept Promises as their arguments (in addition to
> iterables), then you could clarify the spec to indicate that the species is
> ignored when resolving the Promise argument, then honored when processing
> the `all` or `race`.
> But if you want a crisp semantics and weren't afraid of more last-minute
> changes to the Promise spec, then you'd define `Promise.all` and
> `Promise.race` to ignore species, and handle the species issue in ES7 when
> you define `Promise.prototype.all` and `Promise.prototype.race`.
> IMO, at least.
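The species pattern being debated here is JavaScript-specific, but the underlying idea (a subclass pointing its "species" back at the base class so that derived methods don't propagate the subclass's special behaviour across `then()`) can be sketched in any language. Here is a rough Python analogy with invented class names; it is not how Promises or `Symbol.species` are actually implemented:

```python
# Rough Python analogy (not JavaScript) of the species pattern discussed
# above: then() builds its result using cls._species, so a subclass can
# point _species at the base class and keep its special behaviour from
# being "contagious" across then(). All names here are invented.
class FakePromise:
    _species = None  # None means "construct results using my own class"

    def __init__(self, value):
        self.value = value

    def then(self, fn):
        # Mirrors SpeciesConstructor: consult the species when building results.
        species = type(self)._species or type(self)
        return species(fn(self.value))

class TimeoutPromise(FakePromise):
    # Analogous to `TimeoutPromise[Symbol.species] = Promise;` in the thread:
    # results of then() are plain FakePromises, not TimeoutPromises.
    _species = FakePromise

p = TimeoutPromise(1).then(lambda v: v + 1)
print(type(p).__name__, p.value)  # FakePromise 2
```

The "static methods ignore species" question in the thread is then just: should a hypothetical `TimeoutPromise.all(...)` consult `_species` the way `then()` does, or always build its own class?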
How are BUY orders matched to seller(s)? If I put a BUY order in for 500 shares, will it execute only if it matches one sell for 500 shares, or will it accumulate smaller sell orders up to 500 and then execute?

Generally speaking (there are some pretty exotic order flags, but generally this is how it works): there are two parts to an order as far as a matching engine is concerned. There is a market maker and a market taker; this is different from buyer and seller. Let's run through a couple of examples.

Beginning: a buyer places an order with their highest bid, 100 shares for $12.34. A seller places an order with their lowest ask, 100 shares for $12.40. These two orders don't fill because they are $0.06 apart. They sit on the order book as the current market makers. One is making a market for sellers, the other is making a market for buyers.

Order 1: Should a buy order come in for 50 shares at $12.40, surpassing the existing high bid and priced at the existing low ask, it will "take" from the market maker (assuming the 100 shares at $12.40 aren't flagged all-or-none, fill-or-kill or some other more exotic order flag). After this order the published bid/ask will be:

100 shares at $12.34
50 shares at $12.40

Order 2: If a new order comes in to sell 20 at $12.35, it would not get filled; it's still above the current high bid. But it's below the current low ask, so the order book will adjust to:

100 shares at $12.34
20 shares at $12.35

(with the 50 shares at $12.40 still on the book, but no longer the current low)

Order 3: If a "market" order came in to buy 50 shares at market, then it will fill the 20 shares at $12.35 and 30 shares at $12.40. Market orders are dangerous because, generally, in retail markets you don't know where the next shares are priced. After the first 20, the next published ask might be at $15, and your order will fill 30 shares at $15 or potentially even higher.
The order book now looks like:

100 shares at $12.34
20 shares at $12.40

Alternate Order 3: Let's say, alternatively, that a limit order came in to buy 50 shares at $12.35. This order would take the 20 shares at $12.35 and then become the new market maker/high bid for the remaining unfilled shares. The order book now looks like:

30 shares at $12.35
50 shares at $12.40

This is general market order book mechanics. Just know there is a market maker and a market taker for both buy and sell sides of the transaction. Most retail securities brokers don't price trade commissions differently for market makers and market takers. I've seen maker/taker pricing in crypto markets; generally a market maker will pay a lower commission.

Maker-taker fees (equities) were started by the ECNs about 20 years ago. Makers are paid a small fee (a fraction of a penny per share) when they add liquidity to the order book by placing a limit sell order above the bid or a buy order below the ask (and the order is executed). Takers are those who buy and sell at the market, removing liquidity from the order book. Takers are charged the small fee by the exchange. A good broker passes the maker rebate on to the trader. Some exchanges have an inverted taker-maker model, and some imply that this is to lure orders in for high-frequency traders.

If you have set it as an all-or-nothing order, it will only fill if it can find a match (i.e. a seller) willing to sell at that price (or lower) for the total number of shares you are asking for. Otherwise, it will piece the order together.

Example: You place a BUY order for 500 shares at $10.00. The SELL book looks like so:

100 shares at $9.84
200 shares at $9.85
300 shares at $9.90

Your order will be filled for 100 shares @ $9.84, 200 at $9.85, and 200 at $9.90.
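The walkthroughs above can be condensed into a toy matching sketch. This is illustrative only (real engines also handle time priority, order flags like all-or-none, cancels, and both sides of the book), but it reproduces the final example's fills:

```python
# Toy sketch of the price-priority matching walked through above: an
# incoming buy limit order takes from the ask side, cheapest first, and
# any unfilled remainder would rest on the book as a new bid.
# Illustrative only -- real engines also handle time priority, order
# flags (all-or-none, fill-or-kill), cancels, etc.
def match_buy(qty, limit, asks):
    """asks: list of [shares, price] levels sorted by price ascending."""
    fills = []
    for level in asks:
        if qty == 0 or level[1] > limit:
            break                      # done, or remaining asks are too expensive
        take = min(qty, level[0])      # take as much as this level offers
        fills.append((take, level[1]))
        level[0] -= take
        qty -= take
    remaining_asks = [lvl for lvl in asks if lvl[0] > 0]
    return fills, qty, remaining_asks

# The final example from the answer: buy 500 at $10.00 against
# asks of 100 @ $9.84, 200 @ $9.85, 300 @ $9.90.
asks = [[100, 9.84], [200, 9.85], [300, 9.90]]
fills, unfilled, book = match_buy(500, 10.00, asks)
print(fills)     # [(100, 9.84), (200, 9.85), (200, 9.9)]
print(book)      # [[100, 9.9]] -- 100 shares at $9.90 remain on the book
```

An all-or-nothing flag would simply check, before taking anything, whether the fillable quantity at or below the limit covers the whole order.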
STACK_EXCHANGE
package it.polimi.se2019.server.cards.weapons;

import it.polimi.se2019.server.actions.ActionUnit;
import it.polimi.se2019.server.cards.Card;
import it.polimi.se2019.server.games.player.AmmoColor;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * A weapon is a card with some extra functionality: it can be loaded, it has a cost
 * to be picked up and a cost to be reloaded, and it can have some optional action units.
 *
 * @author FF
 */
public class Weapon extends Card {

    private boolean loaded;
    private List<AmmoColor> pickUpCost;
    private List<AmmoColor> reloadCost;
    private List<ActionUnit> optionalEffectList;

    /**
     * Default constructor; by default a weapon is loaded.
     *
     * @param actionUnitList the list of action units
     * @param name the name
     * @param pickUpCost the cost to pick up
     * @param reloadCost the cost to reload
     * @param optionalEffectList the list of optional effects
     */
    public Weapon(List<ActionUnit> actionUnitList, String name, List<AmmoColor> pickUpCost,
                  List<AmmoColor> reloadCost, List<ActionUnit> optionalEffectList) {
        super(actionUnitList, name);
        this.loaded = true;
        this.pickUpCost = pickUpCost;
        this.reloadCost = reloadCost;
        this.optionalEffectList = optionalEffectList;
    }

    /**
     * The loaded status getter.
     *
     * @return loaded status
     */
    public boolean isLoaded() {
        return loaded;
    }

    /**
     * Sets the loaded status.
     *
     * @param loaded new loaded status
     */
    public void setLoaded(boolean loaded) {
        this.loaded = loaded;
    }

    /**
     * The pickup cost getter.
     *
     * @return pickup cost
     */
    public List<AmmoColor> getPickUpCost() {
        return pickUpCost;
    }

    /**
     * Sets the pickup cost.
     *
     * @param pickUpCost new pickup cost
     */
    public void setPickUpCost(List<AmmoColor> pickUpCost) {
        this.pickUpCost = pickUpCost;
    }

    /**
     * The reload cost getter.
     *
     * @return reload cost
     */
    public List<AmmoColor> getReloadCost() {
        return reloadCost;
    }

    /**
     * Sets the reload cost.
     *
     * @param reloadCost new reload cost
     */
    public void setReloadCost(List<AmmoColor> reloadCost) {
        this.reloadCost = reloadCost;
    }

    /**
     * The optional effects list getter.
     *
     * @return optional effects
     */
    public List<ActionUnit> getOptionalEffectList() {
        return optionalEffectList;
    }

    /**
     * Sets the optional effects list.
     *
     * @param optionalEffectList new optional effects list
     */
    public void setOptionalEffectList(List<ActionUnit> optionalEffectList) {
        this.optionalEffectList = optionalEffectList;
    }

    /**
     * The pickup cost getter as a map.
     *
     * @return pickup cost
     */
    public Map<AmmoColor, Integer> getPickupCostAsMap() {
        return convert(this.pickUpCost);
    }

    /**
     * The reload cost getter as a map.
     *
     * @return reload cost
     */
    public Map<AmmoColor, Integer> getReloadCostAsMap() {
        return convert(this.reloadCost);
    }

    /**
     * Helper that converts a list of ammo to a map of per-colour counts.
     *
     * @param ammo the list of ammo
     * @return the map of ammo
     */
    private Map<AmmoColor, Integer> convert(List<AmmoColor> ammo) {
        Map<AmmoColor, Integer> convertedAmmo = new HashMap<>();
        for (AmmoColor ammoColor : ammo) {
            // initialize to zero if absent
            convertedAmmo.putIfAbsent(ammoColor, 0);
            // +1
            convertedAmmo.put(ammoColor, convertedAmmo.get(ammoColor) + 1);
        }
        return convertedAmmo;
    }
}
We will be looking specifically at a couple of event log entries, and the troubleshooting will be based on them. The error event IDs are 6482 and 6398. The context: a SharePoint 2013 backup configured with CommVault is partially failing. The failed databases are related to a Search service application. You can perform a manual backup from the Central Administration console for the specific databases.

Event ID 6482 says:

Application Server Administration job failed for service instance Microsoft.Office.Server.Search.Administration.SearchServiceInstance (1ab997dc-406d-49fd-a5bb-aa688c67ff4e). Reason: An update conflict has occurred, and you must re-try this action. The object SearchDataAccessServiceInstance was updated by Servername\svc-sp-Farm, in the OWSTIMER (18132) process, on machine Server-SP01. View the tracing log for more information about the conflict.

Event ID 6398 says:

The Execute method of job definition Microsoft.Office.Server.UserProfiles.LMTRepopulationJob (ID 19bb8b55-2de1-4aee-af1c-12704fe87a27) threw an exception. More information is included below. An update conflict has occurred, and you must re-try this action. The object LMTRepopulationJob Name=User Profile Service Application_LMTRepopulationJob was updated by Server-SP\svc-sp-Farm, in the OWSTIMER (18132) process, on machine MSU-SYD-SP01. View the tracing log for more information about the conflict.

So here are the step-by-step verifications we need to perform.
- Make sure the SharePoint Administration and SharePoint Timer services are configured with the same service account. (If there is any deviation, correct it and restart both services.)
- Log in to the SharePoint SQL Server and verify that the failed databases are configured with the correct permissions. That means the assigned service account (the farm service account) has the DB owner and Backup Operator permissions on the failed databases.
- Finally, we need to clear the config cache.
Follow these steps to perform the cache folder cleanup.
- First, stop the SharePoint Timer service (launch services.msc).
- Open the folder %SystemDrive%\ProgramData\Microsoft\SharePoint\Config\ and delete all the XML files in the config cache folder. (Make sure you don't delete the folder itself or the file named Cache.ini.)
- Copy the Cache.ini file to a safe location as a backup, then edit the Cache.ini inside the config folder: delete all of its contents, type just 1, and save the file. By doing this you are notifying the services that all cache settings need to be refreshed. This value of 1 will eventually be updated to another number as the cache is regenerated.
- Start the Timer service again.

Finally, you can initiate a backup job from the SharePoint Central Administration console and verify the backup status.
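For repeatable cleanups, the steps above can be sketched as a small script. This is a hedged illustration, not an official tool: reset_config_cache and its arguments are invented names, the real config folder lives under %SystemDrive%\ProgramData\Microsoft\SharePoint\Config\, and the Timer service must be stopped before running anything like this.

```python
from pathlib import Path
import shutil

def reset_config_cache(config_dir: str, backup_dir: str) -> int:
    """Delete the cached XML files and reset Cache.ini to '1'.

    Mirrors the manual steps: keep the folder and Cache.ini,
    back up Cache.ini first, remove only the *.xml files.
    Returns the number of XML files removed.
    """
    config = Path(config_dir)
    cache_ini = config / "Cache.ini"

    # Back up Cache.ini before touching it
    shutil.copy2(cache_ini, Path(backup_dir) / "Cache.ini")

    # Delete the XML files only -- never the folder or Cache.ini
    removed = 0
    for xml_file in config.glob("*.xml"):
        xml_file.unlink()
        removed += 1

    # Tell the Timer service to rebuild the cache from scratch
    cache_ini.write_text("1")
    return removed
```

The function takes the cache directory as a parameter, so the same logic can be exercised against a scratch folder before pointing it at a live server.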
||||||||||||||||||||||||||||||||
||  ST DISK DRIVE CONVERSION  ||
||   SINGLE TO DOUBLE SIDED   ||
||||||||||||||||||||||||||||||||

The single sided Atari SF354 disk drive can be converted to a double sided drive for $94. It is completely equivalent to the SF314 except that it uses far less power. The SF354 contains an Epson SMD-130 drive and the SF314 contains an Epson SMD-140 drive. In addition, both drives contain a connector board at the rear of the drive housing which interfaces the Atari cables to the header sockets which plug into the Epson drive. The boards also have jumper wires which tell the 520ST what type of drive is connected. The cases for both drives are identical (except for the SF354/SF314 marking on the outside).

There are eight Epson SMD-100 series disk drives. The SMD-130 and SMD-170 are interchangeable single sided drives. Similarly, the SMD-140 and SMD-180 are interchangeable double sided drives. The difference is that the SMD-130 and SMD-140 are intended for AC powered equipment and consume 1.3W on standby and 6.9W on read/write. The SMD-170 and SMD-180 are designed for use with both AC and battery powered equipment and use 0.3W on standby and 2.9W on read/write.

A good source for the Epson SMD-180 drive is:

    Halted Specialties Co. Inc.
    827 E. Evelyn Avenue
    Sunnyvale, CA 94086
    (408) 732-1573

The cost is $89 plus $5 shipping. They accept phone orders using a credit card and ship via UPS. Since the cost of an SF314 is about $219, the conversion results in a considerable saving. The only problem is what do you do with the old single sided drive?

To convert the drive, proceed as follows:

Remove the four screws around the perimeter of the SF354 disk drive and gently lift the rear of the cover while lightly pressing in the disk connector sockets at the rear. The sockets and switch should pop free, and then the top can be unhooked from the disk active LED and disk eject switch at the front.
Carefully unplug the two socket connectors between the interface board and the rear of the SMD-130. Use a small, flat bladed screwdriver to gently and evenly pry them free. Looking at the top of the board, in the lower left-hand corner is a place for a jumper wire marked W1 between locations SG and FG. Connect a piece of wire between these points and solder it in place. Turn the board over and rotate it 180 degrees. Find the four parallel jumper wires on the right hand side. Remove the first and third wires, either by cutting them away or unsoldering them. This completes the modifications to this board.

     1 3     1 3
    | O O   O O |   | O  SG-    : new jumper
    x L |   x L |   |    |      : W1
    x remove jumper x
    2 |     x 1 |   |    |      : J5 J6
    | existing jumper
    O O   O O |     | O  FG-
     2 4     2 4    |

At this point, you have to decide how functional you want the drive active LED to be. You will probably have noticed that the disk active LED is on the left front on the SMD-180 and on the right front on the SMD-130. You have three choices:
a) Forget about it and use your ears to tell you when the drive is active;
b) Drill a small hole through the plastic front at the location of the SMD-180 LED;
c) Unsolder the LED on the SMD-180, extend it on wires to the SMD-130 location and epoxy it in place behind the old LED window.

I used clear epoxy with a small piece of silver foil as a reflector to achieve sufficient LED brilliance. I did not change LEDs as I suspect the SMD-180 LED has a far lower driving current. To remove the LED, I had to remove the two screws holding the board and the two cables plugged in by the stepper motor, tilt the board up and use a solder sucker to get it out. If you're willing to do this, you don't need further instructions!

Remove the three screws on the bottom of the disk drive case and lift off the SMD-130. Remove the two screws holding on the RFI shield and slide it off to the rear. Now slide it onto the new drive and put the two screws back in place.
Use a small Phillips screwdriver (about 1/8" diameter) to loosen the two screws holding the plastic disk case front on the SMD-130. They are accessible from the top, looking vertically straight down just behind the plastic front. Once the screws are completely free, gently lift the plastic front off the SMD-130, taking the screws along. Look behind the eject button and note that it is attached by two plastic hooks through a rectangular hole in the metal eject lever. Very gently compress the two plastic clips together, remove the plastic knob and push it into the hole on the SMD-180 eject lever. Install the plastic drive front on the SMD-180 by reversing the removal procedure.

Screw the SMD-180 onto the case bottom using the three retaining screws. Be careful to position it as far forward as possible so that the plastic front touches the lip on the case bottom. Plug the two connectors from the interface board into the rear of the SMD-180, hook the top cover over the LED and eject button, and lower the rear over the interface board. Once in place, do up the four screws on the bottom and the SF354 is now an SF314. Hook up and enjoy.
The Actions page in the History area is where you can track Device Action requests. If the required permissions are associated with your user role, you can use Actions to do the following:
- view Device Action requests
- view the progress for the requests
- view the Action status for individual devices in a request
- cancel requests on devices with an Action status of Pending

The following Device Actions are supported: All other Device Actions are shown in Reports > Recent Events > Event History.

To view Actions, your role needs to be granted View permissions for one of the supported actions. This allows you to see Device Action requests for all of the supported actions that the role is granted View permissions for. In order to export from the Actions page, your role needs to be granted Export permissions for one of the supported actions. You can export Device Action requests for all supported actions that the role is granted Export permissions for.

To open the Actions page, click on the navigation bar and click Actions. Actions provides you with the following information: Device Action requests are listed in the sidebar as either In Progress or Done, with the most recent requests shown first. In Progress lists all of the requests where some or all of the devices in the request have an Action status of Pending or Processing. Each request has a progress bar showing the status of the request. The progress is represented by the following:
- Done: Portion of devices with a status of Failed, Succeeded, Completed, or Canceled
- In Progress: Portion of devices with a status of Pending or Processing

Requests move from In Progress to Done when all devices in the request have an Action status of Failed, Succeeded, Completed, or Canceled. By default, Done shows requests that have been completed within the last year. You can find a specific request by using search and the drop-downs. Once a request is selected, you see a request summary in the work area.
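The bookkeeping behind the progress bar described above can be sketched in a few lines. This is a hedged illustration of the documented rules only; request_progress is an invented name, not part of the Absolute console:

```python
# Action statuses that count toward the "Done" portion of the progress bar
DONE = {"Failed", "Succeeded", "Completed", "Canceled"}
# Statuses that keep a request in the "In Progress" list
IN_PROGRESS = {"Pending", "Processing"}

def request_progress(statuses):
    """Return (fraction of devices finished, whether the request moves to Done).

    A request moves from In Progress to Done only when every device in it
    has reached one of the DONE statuses.
    """
    done = sum(1 for s in statuses if s in DONE)
    return done / len(statuses), done == len(statuses)
```

For example, a four-device request with statuses Succeeded, Pending, Failed, Processing is 50% done and stays in In Progress.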
The summary includes:
- The progress of the action request, showing the percentage of devices that are in each state. Actions can be in one of the following states:
|Request name|Consists of the action type, the number of devices in the request, the time of the request, and the requestor's name|
|Action ID|The system defined unique identifier for the request|
|Description|The description the requestor added when making the request, if any|
- Any configuration details that were used to create the action request

Information about each device is organized in a table with the following columns:
- Includes the device's device name (the name assigned to the device in the operating system; for Chromebooks, device name is not applicable and therefore shows as "Chrome" in the Absolute console) and serial number (the identification number assigned to the device by the device manufacturer; for Windows devices, this value may correspond to the serial number of the BIOS, the motherboard, or the chassis, depending on the manufacturer). To view the device's Device Details page, click the linked device name.
|Status > Action status|The progress of the action and the date of the last status update. Actions can be in one of the following states:|
|Status > Failure reason|The reason why the action failed|
|Script return code|Only applies to Run Script requests. The return code informs the user whether the script ran successfully: the script returns 0 on success; other return codes are defined in the script. Click the link to the script in the request summary to view the script.|
|OS name|The operating system of the device|
- Only applies to Wipe requests: a link to download the Certificate of Sanitization

You can use device name or serial number as search criteria to search for a device in a request. You can also include other Device Action information in the report by adding more columns.
NOTE If No Data shows in any of the columns, the information was not detected on the device or the device information is duplicated. Device information may be duplicated if a device was listed twice in the File Upload; the second record is displayed as No Data.

By default, the data is sorted by Status > Action status. Devices are displayed in the following order: You can change the sort order by clicking a column heading.

To view the status for an action:
- Log in to the Absolute console as a user with View permissions for the action being completed.
- On the navigation bar, click and click Actions.
- Select an option from the Actions drop-down to filter by type of action.
- Select an option from the Created drop-down to filter by the date the requests were created. Options vary from Past 7 days to Past 90 days.
  NOTE When the Created drop-down is used to filter the list of requests, actions in Done are filtered by the date the Device Action requests were created, not by the date they were completed.
- Click to expand the Search field and enter one of the following:
  - the last name of the Requestor
  - the script name (only applies to Run Script requests)
  To clear the Search field, click .
- Click In Progress or Done to hide either list of Device Action requests.

The Device Action request's summary and details open in the work area.

Depending on the Absolute licenses associated with your account, and your user role, you may be able to perform some or all of the following tasks from the Actions page:
- Cancel a pending Device Action request
- Search for and view individual devices
- Adjust the columns
- Add or remove filters
- Export the information from a Device Action request

When you export the information from a Device Action request:
- Identifier is exported as the first column even if you don't include it in the Request details.
- Action status is exported as an Action status column and an Action status updated column.
- Device name is exported as a Device name column and a Serial number column.

The Actions page is static; the statuses of requests and devices don't update dynamically. To refresh the page, do one of the following:
- click the icon in the bottom left of the page
- leave the Actions page and return
- use your browser's refresh button
This is a relatively simple request: I need my record name to follow a specific standard. Let's look at a scenario: I have records created in my CDM through automation. Let's look at Orders, but this could be based on any other record type. When creating new Order records, part of the data flowing into my system includes the Account the Order belongs to, as well as an Account that will be invoiced for these Orders. Imagine a scenario with a large organization with multiple locations: Orders are fulfilled for each location, but all billing goes to the main office.

This was achieved in the past either by calculating and generating the Order name directly in the transformation done by the integration tool, or through a plugin. If records were created manually, you could also do this on the client side, using scripting. Since these records are created automatically, no client side code runs, so you're really stuck with, most likely, a plugin. Now, we can also do this easily using a Flow. Let's see how that's created.

Start by navigating to flow.microsoft.com and log in with an account that has access to this environment. Select from the Environments drop-down the correct environment where you want this Flow to execute. As an added bonus, Flows can now be part of a solution, so if you have a solution in place, select Solutions from the left side navigation, and open the Solution that will include this Flow. Alternatively, you can build this Flow and add it to a Solution later on. Best practice is to include your Flows in a solution, so whichever way you create your Flow, make sure it gets added to a Solution.

Select New from the top ribbon, and choose Automated – from Blank, as seen below. Give your Flow a meaningful name, and in the trigger search for Common Data Service. Choose When a record is created, as seen below. Select Create. You are taken to the Flow editor, and the first step is already populated. Fill in the required fields as seen below.
My current environment is ntdemo15 (a trial environment), and I chose to set the scope to Organization. Add a new step. Now we need two Get record steps to retrieve the two accounts, the Order account and the Invoice account. Let's first get the Order Account (Customer). Similarly, let's get the Invoice Account also, as seen below.

Finally, add the Update a record action, and fill in the required fields. Environment and Entity Name remain the same as in the previous step, but in the Record identifier, select the Order Id (Order) from the earlier When a record is created step. This points us back to the record that triggered this action, so we can update its name. See the screenshot below.

In the name field, I want to concatenate first the date and time of when the process is run (you could also use the record created date and time if you want), along with the Invoice Account name and the Customer (Order Account) name. You do this by using an Expression for the date/time and Dynamic content for the existing record values. When selecting the Dynamic content, you will observe the separate headers for the two Get record steps we added earlier, as shown below. Make sure you select the correct Account Name for each Account you want the name retrieved from in the respective steps. In our example, the Get record step retrieves the Customer (Order Account) while the Get record 2 step retrieves the Invoice Account.

You do need these two Get record steps in order to gain access to the fields of the related records. Otherwise, if you just point to the lookup values on your Order record, you will only get the account IDs for your two accounts, as seen below. If all is good, your result will look like the screenshot below. It took a whole 5 minutes to create this, no code required, and now you can track the flow runs individually by looking at the runs log, as you can see below.
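The name assembled in the Update step boils down to simple concatenation of a timestamp and the two account names. As a rough sketch of the same logic (build_order_name and its exact formatting are illustrative assumptions, not what Flow generates internally):

```python
from datetime import datetime, timezone
from typing import Optional

def build_order_name(invoice_account: str, customer_account: str,
                     when: Optional[datetime] = None) -> str:
    """Build the Order name the same way the Flow's Update step does:
    run timestamp, then the Invoice Account name, then the Customer name."""
    when = when or datetime.now(timezone.utc)
    return f"{when:%Y-%m-%d %H:%M} - {invoice_account} - {customer_account}"
```

In the Flow itself the equivalent pieces come from an expression for the date/time and the Dynamic content tokens for the two Get record steps.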
You can also see the failed runs here, and try to re-run them directly from the Run history window by selecting a failed one and clicking Resubmit on the ribbon, as seen below. Drop a note below if this saved you time!
Measure volume in DDDs or ADQs

The HSCIC's guide to Prescribing Measures says:

Historically, the number of items was used as a measure of volume. However, for areas of prescribing where there is a good deal of repeat prescribing, the number of items can be misleading because different practices use a different duration of supply, i.e. some will issue a prescription monthly, others for two months or three months and so will have different numbers of items for the same amount of medication. Even within a single practice there can be differences in the duration of prescriptions. … The system of Defined Daily Doses (DDDs), developed and maintained by the World Health Organisation (WHO), attempts to overcome these limitations. In this system each drug is given a value, within its recognised dosage range, that represents the assumed average maintenance dose per day for a drug used for its main indication in adults. It is emphasised that the DDD is a unit of measurement; it is not a recommended dose and may not be a real dose. … To allow comparison of prescribing within England there is a need to have a system which more accurately reflects primary care prescribing. To meet this need Average Daily Quantities (ADQs) have been developed by an expert group convened by the PSU.

ADQs and DDDs are commonly used in prescribing measures, indicators and comparators, both public and internal. They are generally supported, to some extent, by internal prescribing measurement applications like ePACT.net (used in England) and CASPA.net (used in Wales). For these reasons it would be useful to support ADQs and/or DDDs in OpenPrescribing.

Where can we find ADQs or DDDs?

Defined Daily Doses (DDDs) can be looked up individually online. That source states "Use of all or parts of the material requires reference to the WHO Collaborating Centre for Drug Statistics Methodology. Copying and distribution for commercial purposes is not allowed. Changing or manipulating the material is not allowed."
(The German Institute of Medical Documentation and Information (DIMDI) makes their own adaptation of the WHO DDDs 'freely' available.) Average Daily Quantities (ADQs) can be found on the HSCIC website. That document states "This work remains the sole and exclusive property of [HSCIC] and may only be reproduced where there is explicit reference to the ownership of The NHS Information Centre. Permission for reproducing this material must be sought in advance from The NHS Information Centre." As can be seen, there are some intellectual property issues that need to be at least clarified before either of these sources can be used. How do we calculate volume in ADQs or DDDs? There are two problems to be solved in order to calculate ADQs or DDDs: Determining which, if any, ADQ/DDD applies to the Presentation. Determining the amount of the Presentation in the units of the ADQ/DDD. In the case of DDDs, the first involves mapping BNF Presentations to codes in the Anatomical Therapeutic Chemical Classification System. I am not aware of any freely available mapping. (See The Detail's report of prescribing for an example of such mappings.) In the case of ADQs, the first seems more straightforward. The HSCIC document linked above identifies that the relevant ADQ for a presentation usually depends only on the BNF Chemical and the route of administration. Some ADQs are more specific regarding the form of the presentation (aerosol vs. solution, for example) but the small number of relevant presentations could be identified manually. I have opened #4 to look at how the given BNF Name for a presentation might be used to extract both the route of administration and the amount in the units of the relevant ADQ. How do we make volume in ADQs or DDDs available? Volume in ADQs or DDDs should be made available in the user interface and the API alongside the current measures of items and spend. 
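As a sketch of the arithmetic behind the second problem, the number of DDDs dispensed is the total quantity of drug divided by the DDD. The helper below is purely illustrative (volume_in_ddds is an invented name, and real presentations also involve units other than mg):

```python
def volume_in_ddds(quantity: float, strength_mg: float, ddd_mg: float) -> float:
    """Number of defined daily doses dispensed.

    quantity    -- number of dose units dispensed (e.g. tablets)
    strength_mg -- amount of drug per dose unit, in mg
    ddd_mg      -- the WHO DDD for this drug/route, in mg
    """
    return quantity * strength_mg / ddd_mg
```

For example, 28 tablets of 20 mg each, for a drug whose DDD is 10 mg, come to 56 DDDs. The hard part, as discussed above, is obtaining the DDD/ADQ and the strength for each presentation, not the division itself.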
This is complicated by the fact that not every presentation will have an ADQ or DDD defined and, as a consequence, it won't always be possible to measure volume in ADQs or DDDs for all of any Chapter, Section or even Chemical.

Thanks for this, really helpful. It sounds like the next steps are (i) to clarify the intellectual property issues and (ii) to talk to our users about the priority of adding ADQs/DDDs versus other things. We're going to be discussing new features in the new year. In the meantime, I'll contact the NHS IC to see if they'd object in principle to us using ADQs. I'll also ask them if they know of any open data on units etc. for BNF presentation codes. Will report back here.

It sounds to me like using DDDs should be OK, since we're not commercial, and we're not changing the data.

I've been told by Primary Care Services in NHS Wales that they hold a mapping between BNF presentation codes and their volume in ADQs (and possibly DDDs?), and that they think they can make this freely available. I've not had much in the way of details yet, unfortunately.

I've just noticed that the Welsh data now seems to include pre-calculated ADQs and DDDs for each record. (This is something that I discussed with the NHS Wales team involved a year ago but, since they never got back to me about it, I wonder if someone else asked for it as well.)
The following section describes a few sample usages of the miner. Note: This page describes the installation and usage of the precompiled packages provided above. Each member fills roles and contributes based on his or her skill sets. If so, someone may be using your computer's processing power to mine bitcoins. Windows bitcoin miner cpu. cpuminer is a multi-threaded, highly optimized CPU miner for Litecoin, Bitcoin and other cryptocurrencies. RHminer: a RandomHash CPU and Nvidia GPU miner for PascalCoin (PASC). Nanominer 1.0 by Nanopool, with support for CryptoNight R on Nvidia GPUs. BitCoin may be the most popular digital currency but it is getting harder and harder to mine. An efficient cryptocurrency miner designed for both new and veteran users alike that can adapt mining capabilities based on current system usage. Simba miner pro is a bitcoin miner that can mine for bitcoins with your CPU. How to mine Monero on your CPU for Windows — Steemit. Welcome to Miningbenchmark. In contrast to Bitcoin and Ethereum, a strong mining GPU does not impact the payoffs as much, since the cryptonight algorithm is CPU friendly. Community driven. How To Mine Bitcoin using the CPU - For Linux — Steemit. Currently supported algorithms are SHA-256d and scrypt(N, 1). Self-detection of new blocks with a mini database. Mining pools pay for high value hashes known as shares. In the spirit of transparency and open source, Bitcoin Private is developed by team members all over the world. For better performance, you might also consider compiling the miner source code on the machine where it is intended to be used. CPU mining support. Yes, not with a GPU but with a CPU. It supports the getblocktemplate mining protocol as well as the Stratum mining protocol, and can be used for both solo and pooled mining. There's a bug that may require a manual update. So to earn bitcoin you have to either buy it and hold it or, in technical terms, you have to mine it.

If you are looking to mine PascalCoin (PASC) on your crypto mining hardware, you need to check out RHminer for the RandomHash algorithm. Mining is the most powerful earning method: monetize your CPU, GPU and visitors, and earn Bitcoin directly to your FaucetHub account. Most people join a mining pool to increase their chances of earning bitcoins. System idle is how much of the CPU isn't used. This is precisely what bitcoin mining viruses do, yet many of them can be detected with antivirus programs. This is the first part in my bitcoin adventure series; I'm not a professional miner, I'm just trying out everything I can get between my fingers. This software is defined as a highly optimized, multi-threaded CPU miner for several cryptocurrencies like Litecoin and much more. The intensity is between. Checking currently, Bitcoin reached a new all time high way over 5500 USD; currently it shows it as 5626.48 USD for a bitcoin, which is an increase of about 9% compared to the last check. It is perfectly normal to see a spike in your CPU usage when using SoftMiner. How to mine Bitcoin with your CPU: this post is about how to mine bitcoin with your CPU on Linux, MacOS or Windows. Does your computer seem to be running much slower than usual? The more hashes performed, the more chances of earning bitcoins. How to become a Bitcoin miner? Monero is a cryptocurrency similar to Bitcoin, and in this blogpost we're going to have a look at how to mine some Monero through a command line based miner on Linux. Remote interface capabilities. Each hash has a chance of yielding bitcoins. Bitcoin miners perform complex calculations known as hashes. Mine bitcoin with our desktop mining software for Windows, with a full user interface to make the process easier than ever. To clarify: the consumption of resources is still on your GPU rather than your CPU. Installation. The algorithms supported by this software are scrypt(N, 1) and SHA-256d. Earn in the cryptocurrency of your choice and get a 5000 satoshi bonus when you sign up! Wonderful piece of bitcoin history; they work perfectly as designed and advertised.

The Meaning of Bitcoin Mining Software. Today our work is a decentralized blockchain bank, without involvement of financial institutions, with deposits and investments paying interest rates, powered by 100% open source code; there are so called 'Professional Miners' with dedicated hardware mining BitCoins. Bitcoin broker fees are one thing to be skeptical about - the marketplace is flooded with a considerable amount of junk everywhere. Cryptojacking is the unauthorized use of someone else's computer to mine cryptocurrency. You can use your computer's central processing unit (CPU) to mine ether. Mine your favorite coin on GPU or CPU hardware with xFast miner and get more profit from additional smart-mining options. Bitcoin Core version 0. It's absolutely sickening. Cudo Miner is a multi-algorithm, feature-rich CPU and GPU miner. Svchost is most likely Windows Update. How to get BitCoin? It supports the getblocktemplate and Stratum mining protocols. Hackers do this by getting the victim to click on a malicious link in an email that loads cryptomining code. P2Pool is a decentralized Bitcoin mining pool that works by creating a peer-to-peer network of miner nodes. Multi GPU support. Bitcoin is a form of digital currency created and held electronically. You'll have to either invest in it by buying, or build a rig to mine it. It is a shame I was not paying attention to this, but then again maybe it has saved me some money. This is no longer profitable, since GPU miners are roughly two orders of magnitude more efficient. WITHCoin MINER registration method: WITHCoin adopts a CPU mining method so that anyone can mine WITHCoin easily with low electricity consumption. This site will help you to compare all kinds of hardware devices for mining cryptocurrencies like Bitcoin, Ethereum and Monero. Lower intensity has lower CPU usage but is potentially slower at mining.

Mining Zcash / ZClassic / Zencash. 0 is now available from: this is a new major version release bringing both new features and bug fixes. You can see in the CPU tab that usage is the opposite of what the System Idle process shows. This is a multi-threaded CPU miner for Litecoin and Bitcoin, a fork of Jeff Garzik's reference cpuminer. The recent onset of "cryptojacking" has left victims befuddled, but reforms could make it a valuable tool. What is Simba Miner PRO? These skeptics were amazed with the quality once they gave it a test run. This bitcoin miner can mine with your computer or laptop CPU at least 0.5 bitcoin per day. The miner is open source, with official binaries available for Linux and Windows for different CPUs as well as for more-recent Nvidia GPUs. As opposed to usual pools, P2Pool helps to secure Bitcoin against double-spending and 51% attacks. See COPYING for details. Assuming that you already have at least minor knowledge of Bitcoin mining hardware, we will now be tackling the best Bitcoin mining software. MinerGate's best and easy-to-use mining software will boost your mining effectiveness. Now the question is how to earn Bitcoins? Buy bitcoin with an Amazon gift card code. Guiminer is a GPU/CPU Bitcoin miner for Windows based on poclbm. You can download GUIMiner from the official GitHub repo. In order to mine Litecoin, you have to. I have a Windows Server with 20 cores at 2.10 GHz (AMD Opteron 4171 HE). I am currently using the Ufasoft CPU miner and getting about 35 Mhash/s. I want to know what is the fastest CPU miner that I can use. CGMiner is based on the original code of CPU Miner. This software has many features but the main ones include: fan speed control.
How to create an XML string without a root node?

I am using TypeScript to create a JavaScript function. The function will need to return an XML string which contains keys and values, the values coming from the function's parameters. I would like it to be done safely, for example Terms & Conditions would need encoding to Terms &amp; Conditions. I have seen the DOMParser is recommended for processing XML. My function currently looks like this:

    createDocumentXml(base64Document: string, category: string, documentName: string, documentExtension: string, userId: number, documentSizeBytes: number): string {
        let xmlTemplate =
            '<document xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">' +
            '<active>true</active>' +
            '<category></category>' +
            '<content></content>' +
            '<createdByID></createdByID>' +
            '<createdDate xsi:nil="true"/>' +
            '<description></description>' +
            '<fileExtension></fileExtension>' +
            '<name></name>' +
            '<size></size>' +
            '</document>'

        // use a DOM parser to modify the XML safely (i.e. escape any reserved characters)
        let parser = new DOMParser();
        let xmlDocument = parser.parseFromString(xmlTemplate, 'text/xml');

        xmlDocument.getElementsByTagName('category')[0].textContent = category;
        xmlDocument.getElementsByTagName('content')[0].textContent = base64Document;
        xmlDocument.getElementsByTagName('createdByID')[0].textContent = userId.toString();
        xmlDocument.getElementsByTagName('description')[0].textContent = documentName;
        xmlDocument.getElementsByTagName('fileExtension')[0].textContent = documentExtension;
        xmlDocument.getElementsByTagName('name')[0].textContent = documentName;
        xmlDocument.getElementsByTagName('size')[0].textContent = documentSizeBytes.toString();

        let serializer = new XMLSerializer();
        return serializer.serializeToString(xmlDocument);
    }

When called, it returns a string such as this:

    <document xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <active>true</active>
        <category>Correspondence\Emails</category>
        <content>ZmlzaCAmIGNoaXBzIQ==</content>
        <createdByID>6627774</createdByID>
        <createdDate xsi:nil="true"/>
        <description>Terms &amp; Conditions</description>
        <fileExtension>docx</fileExtension>
        <name>Terms &amp; Conditions</name>
        <size>12345</size>
    </document>

How can I get it to just return the inner XML elements without the document root?

    <active>true</active>
    <category>Correspondence\Emails</category>
    <content>ZmlzaCAmIGNoaXBzIQ==</content>
    <createdByID>6627774</createdByID>
    <createdDate xsi:nil="true"/>
    <description>Terms &amp; Conditions</description>
    <fileExtension>docx</fileExtension>
    <name>Terms &amp; Conditions</name>
    <size>12345</size>

I have tried omitting the root from my xmlTemplate but DOMParser.parseFromString requires one. The result from this function is stored and subsequently passed into another function which creates the full XML data (including a root node) by inserting it at the relevant place.

The XML standard requires a root node. If the document doesn't have one then it's not valid XML.
Why would you want to omit it anyway? PS, that's called encoding, not escaping.

You don't want XML without a root node. This request makes no sense. (In other words: tell us why you think you need that.)

@Iwrestledabearonce. I have updated the question. This function is used to generate a fragment of XML, which is later inserted into a full XML document at the relevant position.

@Tomalak I have updated the question. This function needs to create a fragment of XML rather than a full XML document.

Not relevant, but did you intentionally exclude the "SO" from psychosomatic? If that was intentional, that's pretty clever lol :P

If you want to insert nodes into XML you never do that by string concatenation. You take one XML document and insert nodes from another XML document, via the DOM API (Document.importNode()). Doing it any other way is asking for trouble. On a related note: you should not handle XML in serialized form until the very moment you write it to disk or send it over the network.

@Tomalak Unfortunately I have inherited this code and I cannot easily change the target function at the moment because it is used extensively throughout the code. I understand string concatenation is bad. This method originally used string concatenation and had the encoding bug, which is why I am changing it to use DOM parsing. I'm trying to improve the code through small refactorings that are acceptable to the business.

I see. At this point it probably makes no difference and you could simply use string.replace() to remove the bits you don't want. You are aware that it's not good practice (and you've run into a bug that is caused by doing this kind of thing), so that's a step forward. That awareness is what I was trying to create.

Rather than your current return, you might:

    return Array
        .from(xmlDocument.children[0].children)
        .map(function(node) { return serializer.serializeToString(node); })
        .join("");

Since (X)HTML is actually valid XML, you can use regular DOM functions on them.
    let serializer = new XMLSerializer();
    var str = serializer.serializeToString(xmlDocument);
    var div = document.createElement("div");
    div.innerHTML = str;
    var inner = div.getElementsByTagName('document')[0].innerHTML;
    return inner;

https://jsfiddle.net/qchbuo7c/

However, Tomalak is right, it would be better to just create the whole document all at once if at all possible.

"HTML is actually a subset of XML" - that's completely wrong. HTML has many features that are not available in XML. They share an API and the fundamental concept of being markup languages, but they are not subsets of one another.

@Tomalak - typo, my mistake. Fixed it. With XHTML the statement is true.

However, at this point it would make more sense to simply use the dedicated XML facilities (DOMParser, XMLDocument). Also - the OP's question itself is mis-targeted. They should not even be solving this problem in this particular way in the first place.

You're missing the point trying to be argumentative. Someone else already explained how to do it with XML functions. My point is that native DOM is just as capable of parsing XML as it is HTML. The answer has value and is valid, and I literally stated in the answer that this is not a best-case use case.

The other answer also is not what I am talking about. The OP should not try to generate XML without a root node. What method you use to arrive at "XML without a root node" is completely irrelevant - simply because that is the wrong thing to even want. (There is no reason to become all defensive. I'm not attacking you.)

"The OP should not try to generate XML without a root node" - agree 100%. We've established this. I put it in my answer. "The other answer also is not what I am talking about." Please by all means post an answer and explain. This comment is not really relevant to my answer.
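The encoding the question asks about can also be handled without DOMParser at all, by building the fragment directly with a small escaping helper. This is only a sketch: the function names are mine, not from the thread, and unlike the DOM approach it handles just the five reserved XML characters and trusts the element names to be valid.

```javascript
// Sketch: build the inner fragment directly, escaping reserved XML
// characters in each value. Names here are illustrative, not from the OP.
function escapeXml(value) {
  return String(value)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&apos;");
}

function createDocumentXmlFragment(fields) {
  // fields: plain object whose keys become element names, in order;
  // a null value becomes an xsi:nil element, mirroring createdDate above.
  return Object.entries(fields)
    .map(([tag, value]) =>
      value === null
        ? `<${tag} xsi:nil="true"/>`
        : `<${tag}>${escapeXml(value)}</${tag}>`)
    .join("");
}
```

For example, `createDocumentXmlFragment({ name: "Terms & Conditions" })` yields `<name>Terms &amp; Conditions</name>`, which sidesteps the root-node problem entirely.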
As always I use my blog as a reminder of some useful commands or tips. This one is for dimi: to change the case of a full document in vi, use the following command to transform everything to lowercase: ggguG — and to uppercase: gggUG. I hope this helps 😉

for uppercase: gggUG
for lowercase: ggguG
PS: the first 2 g’s are for moving to the beginning of the document.
PS2: works in vim, don’t know about plain vi

So just wanted to break down that last post so people know what’s up with it. The first 2 ‘g’s put your cursor at the beginning of the file. The 3rd g puts you in what appears to be a “replace” mode of some kind. U means uppercase (or u means lowercase), and the final G means until the end of the file. So if you want to manipulate just a single word you should move the cursor to the beginning of the word then type gUw ((g)replace to (U)uppercase until (w)the end of the word). Or if you want to go from the current position to the end of the line, gU$ ((g)replace to (U)uppercase until ($)the end of the line). Or if you want to go from the current position to the next ‘x’ on the line, gUtx ((g)replace to (U)uppercase (t)to the (x) character)… Of course you can replace x with any character you want. Figured that might help those who don’t know about jumping around in a file. (Also if you only want to do the current line and the next two lines: gU2

This stuff is handy!

was supposed to be gU2<down arrow>

You can think of the third “g” as “global”… as in “apply this to the whole file”.

I didn’t know what the 3rd g meant. Now I know. Thanks!

To my understanding, “apply to the whole file” is done by the last ‘G’, not the 3rd ‘g’. The ‘G’ means “till the end of the document”. The 3rd ‘g’ would mean “apply to all characters” and may be replaced by a regexp, no?

I need a driver for my phillips snn6500 wireless netcard or smc2632 wireless net card. I can`t make them run on linpus linux lite.

Simple tip, but it just saved my day 😀

It is neat stuff
Thanks this helps 🙂

Very useful. Thanks a lot 🙂

Good tips, an easier technique so far.

A quick help that covers all of the important commands. – Jeffrey Nimer

I had a similar problem: had a list of words, one in each line, and wanted to append a ‘;’ followed by the word in lowercase. Thanks for this page, it helped me find the solution:
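Collected from the post and comments, the keystrokes look like this (vim; behavior in plain vi isn't guaranteed, and the final substitute for the ';' + lowercase task is my own suggestion, not the commenter's actual solution):

```vim
gggUG   " gg: go to top, gU: uppercase operator, G: to end of file
ggguG   " same, but gu lowercases
gUw     " uppercase to the end of the current word
gU$     " uppercase to the end of the current line
gUtx    " uppercase up to (not including) the next 'x' on the line
gU2j    " uppercase the current line and the next two lines

" One way to append ';' plus a lowercase copy of each word on its own line
" (\L lowercases the rest of the replacement):
:%s/.*/&;\L&/
```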
I decided to design a circuit with some peripheral ICs. Although SPI may achieve great data rates, because of its master-slave architecture and separate chip-select pin requirement, I decided to use I2C because of its multi-master and truly serial design. (SPI is not a fully serial protocol; it's more like a hybrid protocol: a serial port for data exchange, a parallel port for the chip-select lines.)

I've been planning to add some ADC, DAC, and GPIO ICs to a microcontroller, but these modules are produced with only a few slave addresses available. Manufacturers embed the slave's address into the IC hardware. What a bad idea! So I have run into the addressing problem of the I2C protocol.

Some of the ICs provide address assignment pins (3 pins, which means I would not use more than 7 of them). That's not enough in my case. Some people suggest scanning addresses from 0x00 to 0xff, but this is not a good approach because it's time wasting (I think). Someone says "There are I2C buffers or I2C MUXes you may use", or even another microcontroller for address translation (a NAT-like approach), but weren't we choosing and using I2C for simplicity (like fewer traces on the board, flexibility etc.) in the first place?

There are people having this address assignment problem (eg. http://www.linkedin.com/groups/Methods-enumerating-I2C-slaves-autoassigning-4023060.S.113408032)

Automatic address assignment could be done with an algorithm something like this:

- Slave components would have 2 bytes of non-volatile memory in order to keep their addresses permanently.
- Slaves will have a default address (eg. 0x01).
- On power up, if a slave has not been assigned an address different from the default, the slave will become master and ask for an address from the host (eg. at address 0x00).
- Our actual master node (the microcontroller host, which in this example has address 0x00) will act as a slave naturally, because there is another master on the bus, and will respond by assigning the next available address to the slave.
- Address 0x01 will be reserved for broadcasting. The master may use this address in order to make slaves reset their assigned addresses.

That would be enough for automatic address assignment. Yes, I know about SMBus. It has an automatic address resolution protocol, but it has other limitations (speed, timeout etc.) which make me not want to prefer SMBus over I2C. This address assignment protocol could be optional and activated by a single pin on the IC package, so it would be backwards compatible.

Probably I'm not the smartest person in the universe, but:

- Is there an address assignment protocol that vendors already implement in I2C?
- If not, what would be wrong with this protocol?
- If nothing is wrong, why don't they start implementing a protocol like this? (Again, I know I'm not the smartest person, so they should have thought about this problem and already discovered an algorithm like this.)
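The host side of the proposed scheme can be sketched in a few lines of C. Everything here is an assumption for illustration (this is not an existing I2C standard): the bus transactions themselves are omitted, and the assignable range starts at 0x08 because 7-bit I2C reserves addresses 0x00-0x07 and 0x78-0x7F.

```c
#include <stdint.h>

/* Hypothetical address map for the auto-assignment idea above. */
#define ADDR_HOST      0x00  /* host answers assignment requests here    */
#define ADDR_BROADCAST 0x01  /* reserved: tell slaves to forget addresses */
#define ADDR_FIRST     0x08  /* 0x00-0x07 are reserved in 7-bit I2C      */
#define ADDR_LAST      0x77  /* 0x78-0x7F are reserved in 7-bit I2C      */

static uint8_t next_addr = ADDR_FIRST;

/* Host side: called when an unconfigured slave (still at its default
 * address) requests an address at ADDR_HOST. Returns 0 when the
 * address space is exhausted. */
uint8_t i2c_assign_address(void)
{
    if (next_addr > ADDR_LAST)
        return 0;
    return next_addr++;
}

/* Host side: model of the broadcast reset - on real hardware this would
 * be a write to ADDR_BROADCAST telling slaves to erase the address in
 * their non-volatile memory; here it just rewinds the allocator. */
void i2c_broadcast_reset(void)
{
    next_addr = ADDR_FIRST;
}
```

Each slave would persist the returned address in its 2 bytes of non-volatile memory and answer only on that address from the next power-up onward.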
When I reinstall XP on a different computer everything works correctly - internet, printers, network drives - except group policy. This happens every time and is not a problem on any other machine that hasn't been reinstalled.

Event ID: 5719 "No Domain Controller is available for domain CCDOMAIN due to the following: There are currently no logon servers available to service the logon request."

Event ID: 1054 "Windows cannot obtain the domain controller name for your computer network. (The specified domain either does not exist or could not be contacted.) Group Policy processing aborted."

Any suggestions?

Does it have an SiS900 NIC?

Flush the DNS resolver?

We had this problem with some new machines we bought. Installed Windows XP, everything went fine, they were great, but group policy didn't apply when our users tried logging on to them. Exact same error message as you are getting. Two weeks of hair pulling and desk thumping ensued then *blam* the problem went away by itself and has never resurfaced. I suspect DNS caching issues of some sort, but it's still a mystery. I'd love to know what causes this though, as several weeks of usenet trawling turned up nothing useful.

I had just this problem with a load of Viglen PCs (which had the SiS 900 NIC). I had to make the following registry change...

reg add "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" /v GpNetworkStartTimeoutPolicyValue /t REG_DWORD /d 60

I had a similar problem. I had to delete the computer from the AD, as it seemed to remember the NIC belonged to a computer with a different name. (That's how I interpreted it!!)

Yes, it is an SiS900 NIC.
The reg key kinda worked. I still get the error messages, but it says "applying network settings" now (well, sometimes, not always) and also the software is being pre-installed before login, which wasn't happening before.

I've tried flushing DNS, and deleting the computer name from Active Directory and deleting the IP from DHCP and DNS before taking it off the network, then adding it back on with a totally different name, but it still happens.

I have had a similar problem with a Marvell Yukon NIC. As far as I could work out, it is a driver issue. The NIC driver was starting tcpip too late for Windows, causing this error. A driver update fixed the issue on all the affected machines. It might be due to the order in which Windows starts the relevant services.

Does a subsequent manual gpupdate /force complete successfully?

You can always try the latest drivers for the card. Roll back to the W2K or NT4 drivers. I have found this works for unstable NICs before.

Sounds like a driver issue. We have some RM all-in-ones with those NICs. Ghost runs like a dog with them: 10Mb - 15Mb transfer rate.

Also, look closely at the date/times of events in the system/application log. When I had this problem, the events recorded had time stamps which appeared to be out of sequence!!!
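Put together, the fix from the thread plus a quick way to verify it would look like this (the 60 is in seconds; the `reg query` and `gpupdate` lines are my suggested verification steps, not from the thread):

```bat
rem Give Winlogon extra time for the network before Group Policy is applied
reg add "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" /v GpNetworkStartTimeoutPolicyValue /t REG_DWORD /d 60

rem Confirm the value took, then force a refresh and re-check the event log
reg query "HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon" /v GpNetworkStartTimeoutPolicyValue
gpupdate /force
```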
CFE-555: Add additional fields to configure DNS management on ingresscontrollers

This PR introduces 2 fields to the IngressController API: DNSManagementPolicy of type LoadBalancerDNSManagementPolicy in the ingresscontroller spec, and DNSManagementPolicy in the DNSRecord spec.

RFE: https://issues.redhat.com/browse/RFE-895
EP: https://github.com/openshift/enhancements/pull/1114

Hey @JoelSpeed, Just to confirm, this is a new field which is being added to an existing, already released API?

Yes, adding a new field to an existing API.

What happens to existing users/clients? The API will default the field to managed; will old clients try to force the field to be dropped because they are unaware of the new field? Just wondering if there's a way that a user sets this to Unmanaged, then a controller that doesn't know about the new field clears the field, causing it to be defaulted back to Managed.

The current default behavior is the same as that of Managed, where the ingresscontroller implicitly manages the DNS. So on existing clients the field will be dropped but will not affect the current functionality of the ingresscontroller.

Also, do you intend to have any rules about switching between the two? Is it ok for the operator to take on management of the resource once it is unmanaged?

No, currently there are no checks to restrict going from Unmanaged to Managed or vice versa. This scenario is supported, where it creates the required resources if it doesn't exist.

In other APIs where we have the idea of being unmanaged, typically that is so that a user can either add something that's required as a placeholder (so something else can consume it) or so that they can start making changes that are otherwise unsupported.
Typically we don't let them go back to managed because we don't know what they did in the mean time and our controllers aren't going to be able to resolve that. Just wondering what the use case for unmanaged is here and if we can expand on that so that we can make sure it's safe to go from unmanaged to managed?

Unmanaged has 2 different meanings here. On the ingresscontroller it means the operator will not create any DNSRecord if it doesn't exist, or it means it has unlinked itself from the associated DNSRecord (when you go from Managed to Unmanaged). On the DNSRecord it just denotes its current state. Basically there is no other management that the cluster admins can do other than to delete the DNSRecord, because once dnsManagementPolicy is set to Unmanaged the controller will ignore the resource silently and not perform any actions on it until the IC is updated back to Managed (any changes done on the CR will be reverted if the CR is present when moving from Unmanaged to Managed). The DNSRecord is not auto-deleted, to help the admins with all the needed information in a single place so that it can be used to create a DNS record manually, after which it can be deleted.

I'd like to avoid resources that can sometimes be user managed and sometimes not be user managed.
How about a behavior like this:

1. Setting IngressController.dnsManagementPolicy=Unmanaged always produces a DNSRecord with DNSRecord.spec.dnsManagementPolicy=Unmanaged. If a user deletes the DNSRecord kube object, it reappears.
2. DNSRecord.spec.dnsManagementPolicy=Unmanaged produces no mutation on the cloud resource DNS record, but we do continue to maintain status. DNSRecordStatus grows conditions which indicate whether it is able to read the status or not. Ideally this is done by actually reading from the cloud provider API every five minutes or so.

This keeps the property that DNSRecord API objects are never user modifiable, but keeps the goal of "ingress controller stops managing the cloud provider DNS record".

Going ahead with your suggestion, Unmanaged now means the DNS record on the cloud provider is not managed by the operator and the DNSRecord CR will be retained with .spec.dnsManagementPolicy set as Unmanaged. The same has been updated on the EP (https://github.com/openshift/enhancements/pull/1114/commits/d4ed7ed73dfaf6b735f886a705bbd8e904c47269).

DNSRecordStatus grows conditions which indicate whether it is able to read the status or not.
Ideally this is done by actually reading from the cloud provider API every five minutes or so.

By this, do you mean introduce a new field in DNSRecordStatus to denote the current status of the DNS record on the cloud provider and poll to update the current status?

/label tide/merge-method-squash

Are we all agreed on the approach described here https://github.com/openshift/api/pull/1200#issuecomment-1144081676? If I understand correctly and we are, and the API changes reflect that, then I have no further feedback on the API, so LGTM. cc @Miciah

/lgtm
/lgtm
/approve
/assign @Miciah

The behavior described is good. I'll leave final phrasing and lgtm to @Miciah.

/retest
/retest
/retest

Lovely jubbly!
/lgtm
/label qe-approved
/label doc-approved
docs-approved
/label docs-approved
/assign @CFields651
/label px-approved
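Based on the PR description and the agreed behavior, usage might look roughly like this. The exact field path under the load balancer strategy is my assumption from the linked enhancement, not something stated in this thread:

```yaml
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External
      # Unmanaged: the operator keeps the DNSRecord CR (mirroring the
      # policy into its spec) but stops mutating the cloud provider
      # DNS record; Managed restores the previous default behavior.
      dnsManagementPolicy: Unmanaged
```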
Birth of the Demonic Sword – Chapter 1696: Gathering

Noah’s 3rd approach engaged the tunnel developed by Paradise and Globe, but he wasn’t sure whether that design was nevertheless open. Also, people rulers could always near it while his crew was inside it, so the condition would continue being high risk.

Ruler Elbas eventually glanced toward Noah. The latter was their very best aspire to keep, however the hindrances ended up inside of a kingdom that even he couldn’t have an affect on.

“Stabilizing a preexisting tunnel is much simpler than blindly building a completely new one,” King Elbas eventually declared. “I might be able to make something in line with my remembrances of this system. On the other hand, I don’t plenty of chance to trigger the necessary inscribed items.”

King Elbas eventually glanced toward Noah. The latter was their utmost want to keep, although the hindrances have been inside a world that even he couldn’t have an effect on.

Noah got considered a couple of strategies. He firmly believed that his new status gifted him a superior reluctance versus the chaotic laws of the stormy places. His aura would share that capability, so he could safeguard his party through the vacation.

The audience acquired harvested inside an undercover structure strengthened with lots of layers of inscriptions. Robert and Noah couldn’t completely quit the white colored gentle making use of their affect, so they really had to assemble underneath the floor using their friends.

Your situation appeared quite awful. The group would stay trapped on that area with the Immortal Areas inside the most detrimental choice, knowning that would probably bring about their fatality resulting from Heaven and Earth’s fans.
“Do there exists a system then?” Master Elbas questioned once Noah and Robert returned from their process.

“An extensive passage packed with space Hounds which leads nearby the floating lake active by get ranked 9 existences,” Emperor Elbas discussed.

Getting a sizeable region that didn’t participate in Paradise and Earth would give the group the perfect time to cover and respond to eventual hazards. These rulers wouldn’t manage to find them easily there, and Noah’s team was in anxious need of time.

“The place would you like to even find an army?” Wilfred inquired.

“We need to resume one other facet on the Immortal Areas,” Wilfred announced. “This position has grown to be too risky since position 9 existences started to look.”

The group quickly arrived in locations where the battlefield’s gentle couldn’t access, nonetheless they didn’t stop. None believed that they could cover from Heaven and Entire world, so they desired to placed nearly as much range as you can coming from the get ranked 9 existences.

Noah’s friends weren’t foolish. They had all arrived at comparable a conclusion, regardless if they didn’t check out the stormy locations as being a viable route.

“I assume it’s the perfect time to uncover him,” Noah laughed. “Divine Demon, I guess that you really can’t figure in which Harold is now?”

Noah experienced considered a few ways. He firmly believed that his new point out presented him a superior reluctance resistant to the chaotic guidelines from the stormy territories. His atmosphere would promote that ability, so he could potentially secure his group in the vacation.

The duo could improve overall places in a short time given that they did the trick alongside one another. Noah and Robert included great places with regards to their have an impact on and stripped their regulations away from Paradise and Earth’s system.

Even so, the stormy locations hid a lot more damaging hindrances.
Noah couldn’t even get started to make a take into account an ultimate meeting that has a position 9 creature. Absolutely nothing as part of his electrical power could help him within that situation.

The circumstance shown up quite lousy. The group would keep on being caught on that aspect from the Immortal Lands from the most extreme option, which would most likely bring about their loss of life on account of Heaven and Earth’s readers.

“We only need to travel past the battlefield then,” Robert released. “Why did we even getaway?”

However, a perplexed concept soon sprang out on Divine Demon’s facial area, and the specialist switched toward Noah before wanting to know an issue that kept the entire crew speechless. “Who is Harold?”

“That’s a mission, not much of a program,” Master Elbas snorted.

“We constitute with volumes then,” Noah declared. “Let’s create an army and utilize its electricity to gasoline the inscribed things.”

“And just where is Harold now?” Emperor Elbas questioned.

Noah’s 3rd prepare included the tunnel developed by Paradise and World, but he wasn’t certain whether that composition was even now open up. Also, the rulers could always close it while his group was inside it, and so the circumstance would keep unsafe.

The full crew glanced at Robert and inspected his actions. They had still to understand to have confidence in him, in order that they want to fully understand if his words and phrases were definitely honest.

“Struggle approved!” Divine Demon shouted, and azure electricity begun to accumulate around his number.

“He or she is always so grumpy,” Divine Demon sighed, but Emperor Elbas overlooked that remark to focus on the key issue.

“We don’t determine Heaven and Earth have maintained the tunnel opened,” Master Elbas sighed while massaging his temples.
“I ask yourself reasons why you continue to keep hiring idiots.”

Noah obtained the beginning of plans, but he lacked quite a few necessary elements. He enjoyed a purpose, but he couldn’t even start to think about the best way to access it. However, he didn’t discuss the issue regarding his friends just yet.

Having said that, the stormy places hid a great deal more unsafe hindrances. Noah couldn’t even start to have a policy for an ultimate assembly with a get ranking 9 being. Absolutely nothing on his strength may help him in the circumstance.

“That’s apparent,” Robert mentioned while making his fingers on his chin and putting on a pensive phrase.

“Just how far is it possible to teleport us?” Noah expected Ruler Elbas.

“That’s obvious,” Robert reported while positioning his hands on his chin and donning a pensive manifestation.

“Do we have a system then?” Master Elbas asked once Noah and Robert sent back off their activity.

“We only have to take flight beyond the battleground then,” Robert announced. “Why managed we even getaway?”
In today's digital era, data science and machine learning are pivotal in shaping industries and driving innovation. Data science involves extracting insights from vast amounts of data, while machine learning focuses on creating algorithms that learn from data to make predictions or decisions. Understanding the disparities between these two disciplines is crucial for anyone navigating the world of analytics and technology. This blog will explore the difference between data science and machine learning, providing clarity to those navigating the complex data and technology landscape.

Data science is a field that deals with collecting, processing, and analyzing large sets of data to uncover meaningful insights and patterns. It involves data cleaning, visualization, and statistical analysis to extract valuable information from raw data. Data scientists use programming languages and machine learning algorithms to interpret data and make informed decisions. Data science helps businesses and organizations understand their data better and derive actionable insights to improve processes and make strategic decisions. Data science encompasses several key characteristics that define its essence and utility in extracting valuable insights from data.

Machine learning is a branch of artificial intelligence where computers learn from data and improve over time without being explicitly programmed. Simply put, it's like teaching a computer to learn from examples and make decisions or predictions based on that learning. For example, a machine learning algorithm can be trained on a dataset of emails to recognize spam messages and filter them out. It's used in various applications like recommendation systems, image recognition, and natural language processing to make tasks more efficient and accurate. Machine learning exhibits distinct characteristics that distinguish it as a powerful data analysis and decision-making tool.
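The spam-filter example above can be sketched as a toy word-overlap scorer. This is purely illustrative with made-up data, not a production classifier (real systems would use a trained model such as naive Bayes):

```python
from collections import Counter

# Hypothetical labeled examples ("training data").
SPAM = ["win a free prize now", "free money claim your prize"]
HAM = ["meeting moved to monday", "see attached project report"]

def word_counts(messages):
    """Count how often each word appears across a set of messages."""
    counts = Counter()
    for message in messages:
        counts.update(message.split())
    return counts

spam_words = word_counts(SPAM)
ham_words = word_counts(HAM)

def is_spam(message):
    """Label a new message by which class its words overlap with more."""
    words = message.split()
    spam_score = sum(spam_words[w] for w in words)
    ham_score = sum(ham_words[w] for w in words)
    return spam_score > ham_score
```

The point of the sketch is the workflow, not the scoring rule: the algorithm's behavior comes entirely from the examples it was given, not from hand-written filtering rules.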
Understanding the distinctions between data science and machine learning is crucial for navigating the landscape of analytics and technology. Data science has a broader scope, encompassing various data analysis and interpretation techniques. It involves data collection, cleaning, analysis, and communication of insights. On the other hand, machine learning is a subset of data science focused specifically on developing algorithms and models that learn from data and make predictions or decisions.

In data science, techniques include data manipulation, visualization, and statistical analysis to extract insights from data. Machine learning, by contrast, employs algorithms and statistical models to enable computers to perform specific tasks without explicit instructions. Techniques in machine learning include supervised learning, unsupervised learning, and reinforcement learning, among others.

Data science finds applications in various industries for decision-making, insight generation, and optimization. For example, in healthcare, data science is used for patient diagnosis, treatment optimization, and drug discovery. Machine learning is applied in specific domains such as recommendation systems, image recognition, and natural language processing. For instance, recommendation systems use machine learning algorithms to suggest products or content based on user preferences.

Data science requires proficiency in statistics, programming, data visualization, and domain knowledge. Data scientists must have strong analytical skills and be adept at handling large datasets. Machine learning demands expertise in algorithms, model development, and optimization techniques. Machine learning engineers must understand algorithms and statistical concepts and possess programming skills to effectively implement and deploy machine learning models. Both disciplines require continuous learning and adaptation to stay abreast of advancements in technology and methodologies.
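Supervised learning, mentioned above, can be illustrated with a minimal example: fitting a single weight to labeled data with the closed-form least-squares solution. The data is made up for illustration:

```python
# Toy supervised-learning sketch: learn w so that y ≈ w * x from
# labeled (input, output) pairs; no intercept term for simplicity.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x (hypothetical data)

# Closed-form least squares for one weight: w = Σ(x·y) / Σ(x²)
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def predict(x):
    """Predict an output for an unseen input using the learned weight."""
    return w * x
```

"Supervised" here means the training data carries the correct answers (`ys`); unsupervised methods like clustering work from the inputs alone.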
In the intersection of data science and machine learning lie key areas of collaboration and shared methodologies. These overlapping domains foster synergy between data scientists and machine learning engineers, enriching the analytical process and driving innovation.

In conclusion, while data science and machine learning share commonalities and overlap in certain areas, they are distinct disciplines with unique focuses and applications. Understanding the difference between data science and machine learning is essential for individuals navigating the realms of analytics and technology, empowering them to leverage these disciplines effectively in various domains.

Microsoft Azure Certified Data Science Trainer
Piyush P is a Microsoft-certified Data Scientist and Technical Trainer with 12 years of development and training experience. He is now part of Edoxi Training Institute's expert training team and delivers technical training on Microsoft Azure Data Science. As a certified Microsoft Azure trainer, he seeks to keep improving his data science and analytics expertise.
OPCFW_CODE
Let me start by saying this: I love NuGet, it's straight-up awesome! This post is about how I created a TFS CI build to create NuGet packages and push them to a private NuGet server hosted on site. I have been playing around with the TFS Build server, mainly for a Continuous Integration (CI) build on some of our projects, and I wanted a way to be able to update a core library and have it propagated to all of the projects that use that library without them all needing to be in the same solution. The answer was a private NuGet server with a CI build that automatically packaged my library. I started by setting up a private NuGet server, reading Phil Haack's post about hosting a simple package feed. Once that was done I could start pumping out packages! Now that I have my NuGet server I need to set up the build. I created an Ardent.CommonBuild.proj for my build config and tasks. I found it's easier to manage as a separate file to the actual project file; this file has to be a .sln or .*proj file for the TFS Build server to be able to run it. Now I also wanted to control the version numbers of the packages being pushed out (both manually and automatically), so I set a major and minor version property at the top. I then wanted to include the TFS changeset number as the 3rd part of the version, so I kicked off a TfsVersion task, which outputs the changeset number of a specific path (set this to the project root to get all changes). From there I build the project. Normally the TFS build outputs the dlls to a binaries folder in the build output folder, but I decided to skip the nuspec file step by pointing NuGet at the project file directly. However, you could create a nuspec file and point that to the built dlls; your choice. I also set the version number here for the build. So now we have built our project and we have the version we want, so let's create the package.
Using the Exec task I then run the NuGet.exe pack command on my project file, using the -Version flag to set the version of the package. I also cheated a bit here by setting the -OutputDirectory flag and pointing it to the packages folder on the NuGet server I created. This populates the feed with the new package. The only thing left to do is create a TFS build definition that points to my Ardent.CommonBuild.proj file in source control and set the definition to CI mode. Done! Now for every check-in to that project in TFS I get a new package created with the updated code, and NuGet handles the updates in all of my other projects. Life is good…
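A sketch of what such an Ardent.CommonBuild.proj could look like is below. The property names, project paths, feed share, and the exact TfsVersion task usage are illustrative assumptions (the TfsVersion task ships with third-party MSBuild task libraries, not MSBuild itself), not the author's actual file:

```xml
<Project DefaultTargets="Package"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <MajorVersion>1</MajorVersion>
    <MinorVersion>2</MinorVersion>
  </PropertyGroup>

  <Target Name="Package">
    <!-- Get the TFS changeset number for the project root (hypothetical task usage). -->
    <TfsVersion LocalPath="$(MSBuildProjectDirectory)">
      <Output TaskParameter="Changeset" PropertyName="Changeset" />
    </TfsVersion>

    <!-- Build the library project. -->
    <MSBuild Projects="MyLibrary\MyLibrary.csproj" Targets="Build" />

    <!-- Pack straight from the project file and drop the package on the feed share. -->
    <Exec Command="NuGet.exe pack MyLibrary\MyLibrary.csproj -Version $(MajorVersion).$(MinorVersion).$(Changeset) -OutputDirectory \\nugetserver\Packages" />
  </Target>
</Project>
```

The key idea is that the third version segment comes from the changeset, so every CI build produces a strictly increasing package version without manual bookkeeping.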
OPCFW_CODE
Suggestion for a good IDE for C/C++ with debugging features

I am currently using Eclipse for C/C++ programming. Though I am accustomed to using Eclipse for Java, I cannot figure out a good workflow for using Eclipse for C/C++. What I find lacking is good debugging support. STL structures (vector, map) are not displayed in the debug view the way their equivalents are displayed in Java; they are displayed in a very obscure manner which is hard to interpret. Upon googling I found "Better variable exploring when debugging C++ code with Eclipse/CDT", but I find the method not robust: it cannot display STL structures with objects (strings too) in them, and extending the .gdbinit file to support those will be an entire new project for me (as I am a new programmer). Is there some other IDE good for C/C++ programming and debugging? Or is there something I am missing, because certainly for such an industry-standard language there must be some good support out there. EDIT: I am on Win or *nix

I use Visual Studio Express on Windows. It is free with a lot of debugging functions.

1. Microsoft Visual Studio Express C/C++ (best for Windows)
2. Code::Blocks (best for *nix)
3. Eclipse for C/C++
4. NetBeans

Hope this helps

Visual Studio has the best debugger, no problems displaying STL classes.

@john yep, the best so far I have used in any C/C++ or C# compiler. Kudos to MS

On Linux I would prefer to use Code::Blocks. You can also look at NetBeans.

GNU DEBUGGER: The C and C++ editor is well integrated with the multi-session GNU gdb debugger. You can set variable, exception, system call, line, and function breakpoints and view them in the Breakpoints window. Inspect the call stack and local variables, create watches, and view threads. You can evaluate a selected expression by moving the cursor over it and viewing the tooltip. The Disassembler window displays the assembly instructions for the current source file.

As far as I have heard...
Code::Blocks has the same problems with debugging that I mentioned in my question.

@Archit: You may try NetBeans also!!!

Does it have better debugging support than Eclipse?

@Archit: I am not sure whether it is better or not. But I am pretty sure that the debugger is excellent in NetBeans!!! :)

What I meant to ask was: does it solve the problem that I mentioned in the question?

@Archit: Yes, for sure!!!

Let us continue this discussion in chat.

If you're on a Mac, Xcode is pretty good.

I don't have a Mac. Maybe you could suggest a Win/Linux alternative.

Xcode is not pretty good.

Embarcadero C++Builder, also available as part of RAD Studio, is quite good, and has been undergoing significant development over the past couple of years. It can be used to develop Win32 apps, Win64 apps, Mac OS X apps, as well as iOS and Android apps (the mobile OSes are only in RAD Studio in the Delphi language for now, but C++ support is expected by the end of the year). It has excellent debugging support as well. The IDE runs only on Windows, but does work quite well in a virtual machine running Windows inside a Mac, with either VMware or Parallels. It does require a Mac, running Xcode, to compile Mac OS X or iOS applications -- that can be a separate computer, or the "Mother Ship" if you are running Windows in a virtual machine on the Mac.
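For the Eclipse/CDT route specifically, the usual fix for unreadable STL variables is to load GCC's libstdc++ pretty-printers in ~/.gdbinit. A minimal sketch follows; the checkout path is an assumption, so point it at wherever the printers are installed on your system (they ship with GCC's sources and many distro gdb packages):

```
# ~/.gdbinit: enable the libstdc++ pretty-printers so vector/map/string
# display readably in gdb and in Eclipse CDT's Variables view.
python
import sys
# Assumed path; adjust to your local checkout of the libstdc++ printers.
sys.path.insert(0, '/path/to/gcc/libstdcxx/python')
from libstdcxx.v6.printers import register_libstdcxx_printers
register_libstdcxx_printers(None)
end
```

In Eclipse you also need to point the debug configuration at a gdb with Python support and at this gdbinit file (Debug Configurations, Debugger tab, "GDB command file").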
STACK_EXCHANGE
Some Ideas for Working with Ajna Chakra

It is not uncommon for books or yoga teachers to suggest particular practices for working with an individual chakra. What I am attempting to offer here is something a bit more general: a flavour of my personal view of the sort of practices, and approaches to practice, to use when wishing to work with Ajna Chakra. I have also attempted to distinguish between practices and approaches that will tend to stimulate (make more active) this chakra and those that will tend to stabilize (strengthen) the chakra. Inevitably, there is significant overlap in this and you may well find that, for you personally, something that I suggest as stimulating is more stabilizing and vice versa. In any case, what is offered here is very much a personal view (rather than a consensual view of experienced experts), so my strong recommendation is that you play and experiment with these ideas, both to find what works for you and to help you form your own views (as opposed to absorbing and trusting these ideas as given).

Postures:

| Stimulating | Stabilizing |
| --- | --- |
| Any that require concentration / thought (e.g. unfamiliar, complex, balance poses). | |
| Analyzing postures (e.g. in terms of effects, muscle use, benefits). | Contemplating on the experience of practising a posture(s) or aspect of posture(s). |
| Planning / devising posture sequences or sessions. | Practising and observing the experience of postures. |
| Imagining or visualizing practising a posture or some aspect of a pose. | Remembering or recalling practising a posture or some aspect of a pose. |
| Use of imagery (especially visual). | Practising with a simple focus (especially on “Third Eye”) to rest one’s awareness on. |

Pranayama and breathing:

| Stimulating | Stabilizing |
| --- | --- |
| Any that require concentration / thought (e.g. unfamiliar, complex or changing ratios). | |
| Use of imagery (especially visual). | Use of simple focus to rest one’s awareness on (especially “Third Eye”). |
| Analyzing pranayama and breathing practices (e.g. in terms of effects, when useful, when not so useful). | Practising and observing the experience of practices. |
| Prolonging and “resting” in pauses after inhalation (full pot). | Prolonging and “resting” in pauses after exhalation (empty pot). |
| Nadi Sodhana (alternate nostril breath) – especially if fore & middle fingers rest on “Third Eye”. | |

Visualization and meditation:

| Stimulating | Stabilizing |
| --- | --- |
| Imagining (things, situations, processes …). | Remembering (experiences, situations, pictures …). |
| Word-based: detachment, OM, … | Word-based: wisdom, wholeness, … |
| Interesting / rich / vivid / detailed / complex (and therefore absorbing) imagery (e.g. images of “photographic” detail / complexity). | Use of simple focus to rest awareness on (e.g. a point or a simple shape). |
| Mental rehearsal and imagining “what if” (i.e. mental experiments). | Exploring and reviewing mentally one’s current views / understanding and putting into image form one’s current feelings / emotions / experience. |

General approaches:

| Stimulating | Stabilizing |
| --- | --- |
| Exploring / experimenting / playing with abstract concepts / ideas as real in themselves. | Exploring / experimenting / playing with physical things. |
| Speculating, hypothesizing and imagining possibilities. | Posing / asking questions. |
| Analyzing and thinking things through. | Meditation and contemplation practices. |
| Planning, organizing, designing and orchestrating things and events. | “Being the witnesser” of events / circumstances / experiences. |
| Innovating and inventing things, concepts, theories … | Improvising in light of circumstances and creating (making real) the possibilities one has dreamed up. |
| Exploring / considering mysteries, paradoxes, contradictions … etc. | Applying known processes and theories to solve puzzles, make calculations / predictions … etc. |
OPCFW_CODE
Customize your Robots with exciting Paint, an Intro that mimics your style, and announce your unique Name to be acclaimed as the King Maker of Real Steel Champions! In addition, the instructions in the game are quite easy to understand, helping players quickly make the necessary decisions in difficult situations. However, some in-game power-ups have to be purchased. You'll build the robot you love and fight the other robots from the movie. The game has excellent, high-quality graphics, detailed characters and user-friendly controls that you can customize for yourself. Using this service will incur data usage. Real Boxing Champions Mod Apk v2. This will make the game easy for you. With this money, you can buy all the equipment for your robot. Unleash fierce attacks in each encounter using special moves, jabs and punches with the legendary parts of your favorite heroes: Atom, Zeus, Midas, NoisyBoy and the new superstar Atom Prime. This is a relatively high number of reviews, so you can be assured of what this game brings. This game is interesting and shows off your creativity and ingenuity. As in the film itself, you will fight against other multi-ton robots and turn them into piles of useless metal; defeat your opponents, develop your skills, become stronger and defeat the biggest and indestructible robot, Zeus! Rule the new realm of steel and beat the mysterious new Super Boss to reign supreme as the undisputed champion! You can take part in the smallest competitions, then make a name for yourself and take an honorary place in the Champions League.
First, you need to try to win as many victories as possible to earn the corresponding amount of money. Fight alongside 50 million+ fans of Real Steel World Robot Boxing in an epic sequel. Some good examples like Real Steel Champions will help you do that right away and satisfy your passion for traditional action-style games. The developers let players not only choose a robot from those already in the game but also build their own best fighter. Free Download Link Requires Android: 2. Offline Build your Robot Fighting machine in this ultimate action and sports adventure. The graphics are 3D and the sound quality is good. Real Steel Boxing Champions Mod is a nice action game in which you can get unlimited coins and cash. This will help Real Steel World Robot Boxing bring players battles on a global scale, fiercer than ever. You will be able to get ultimate moves that deal critical hits and finishers to your enemies. Real Steel Boxing Champions 2. No Root Download Real Steel Boxing Champions Mod Apk + Data 2. The number of characters in the game is relatively diverse, with more than 40 different types of robots that make it easy for players to choose according to their own preferences. Unlock All Get unlimited everything and upgrades just by downloading the Real Steel Boxing Champions hacked version. Battle within the boxing realms with 1000s of distinctive robots and dominate 10 inspiring arenas in this epic sequel. This will help make Real Steel World Robot Boxing highly interactive so that players do not get bored when enjoying the game for a long time. Trigger extreme attacks in single combat with direct or special moves and use legendary parts of your favorite heroes.
Game Name Real Steel Boxing Champions Mod Apk Android Version 4. Unleash furious attacks in one-on-one combat using special moves, jabs and punches with legendary parts of your favorite heroes Atom, Zeus, Midas, NoisyBoy and new superstar Atom Prime. Build your Robot Fighting machine in this ultimate action and sports adventure. Real Steel Boxing Champions v1. You will have to face more than 70,000,000 other players, making the quality of the fights very high and contributing to the fun for the players during the game experience. Real Steel Boxing Champions v2. In other words, you have to upgrade your robot. So keep playing to unlock the best gaming experience. The details in the game are displayed extremely sharply on a high-quality display, helping players enjoy the fierce battles in a lively and beautiful way. Build Your Own Robot Champion and Dominate 10 inspiring Arenas. This is an interesting game in which you can show creativity and resourcefulness. You can experiment with different pieces. Get to build your own personal robot from scratch in this ultimate action and sports adventure. Create your champion from 1000s of exclusive and mythical robot parts. In this game, you have to show more skill to pump up your robot and change its parts. You can play not only with strangers but also with your friends. Choose iconic Heads, massive Torsos and strong Hands and Legs. All the most popular heroes of the Real Steel movie are here: Atom, Zeus, Midas, NoisyBoy, as well as the new star Atom Prime.
Unique theme: unlike normal fighting games, Real Steel World Robot Boxing focuses on the subject of robots, a theme that few game publishers pay attention to, but one that gives players the distinctive feel they need from the first time they experience the game. This permission allows the app to record audio at any time without your confirmation. For example, select Atom, Zeus, Midas or NoisyBoy, whichever one gives you the ability to win. This is necessary to install any app outside of the Google Play Store. You can restrict in-app purchases in your store's settings. This ensures a new gaming experience for gamers. This app is rated 4.2 by 4,353 users. Prevent device from sleeping: allows the app to prevent the device from going to sleep. Get rid of your anger and hatred in single-player campaigns; use special moves, tremendous fists and various techniques to overcome the enemy. Learn new attacks and build an invincible guard! To achieve this, you will need both martial arts and experience. Malicious apps could cause excess data usage.
OPCFW_CODE
Create the following directory, if not present:

<system name="pushback">
 <channel name="Pushback">

  <switch name="systems/pushback/linked">
   <default value="0"/>
   <test logic="AND" value="1">
    /sim/model/pushback/position-norm == 1
    /gear/gear/wow == 1
   </test>
  </switch>

  <switch name="systems/pushback/force">
   <default value="0"/>
   <test logic="AND" value="/sim/model/pushback/magnitude">
    systems/pushback/linked == 1
    gear/unit/wheel-speed-fps lt 500
   </test>
   <output>external_reactions/pushback/magnitude</output>
  </switch>

 </channel>
</system>

In the FDM file we have to add the pushback engine. You add it like every other engine, with the exception that the location should be somewhere near the nose wheel of the aircraft (where the pushback truck is connected).

<engine file="pushback">
...

After the </propulsion> tag we add a reference to the pushback system:

And at the end of the FDM, the following is needed to attach the forces of the pushback to the aircraft. The location should be somewhere near the nose wheel of the aircraft (where the pushback truck is connected).

<external_reactions>
 <force name="pushback" frame="BODY">
  <location unit="IN">
   <x> -139 </x>
   <y> 0.0 </y>
   <z> -71.0 </z>
  </location>
  <direction>
   <x>1</x>
   <y>0</y>
   <z>0</z>
  </direction>
 </force>
</external_reactions>

Between the <model> tags in the -set.xml file we set three properties to false (0). Note: if the model tags are already there, you do not need to add them again. Just place the stuff between them. If you want the truck to be connected on startup, set linked and position-norm to 1.

<model>
 <pushback>
  <magnitude>0</magnitude>
  <linked>0</linked>
  <position-norm>0</position-norm>
 </pushback>
</model>

Then below the </sim> tag, we add our menu dialog. A generic one is available as $FG_ROOT/gui/Dialogs/pushback.xml. Add the following lines to include it in the Equipment menu of the aircraft. Note: if the menubar and default tags are already there, you do not need to add them again.
Just place the stuff between them.

<menubar>
 <default>
  <menu n="5">
   <item n="10">
    <label>Pushback</label>
    <name>pushback</name>
    <binding>
     <command>dialog-show</command>
     <dialog-name>pushback</dialog-name>
    </binding>
   </item>
  </menu>
 </default>
</menubar>

And at the end of the file we add an(other) Nasal file. Note: if the nasal tags are already there, you do not need to add them again. Just place the pushback stuff between them.

<nasal>
 <pushback>
  <file>Nasal/pushback.nas</file>
 </pushback>
</nasal>

In your plane's model file, we add a reference to one of the generic pushback models, or a custom made truck shipped with your plane. Edit the offsets to fit your plane. The nose gear of the aircraft should be between the rear gear of the pushback when it is connected.

<model>
 <name>Pushback</name>
 <path>Models\Airport\Pushback\Goldhofert.xml</path>
 <offsets>
  <x-m>-25.0</x-m>
  <y-m>0</y-m>
  <z-m>-4.0</z-m>
 </offsets>
</model>
OPCFW_CODE
DRAFT: Remove ribir_gpu and winit dependencies from ribir_core

fixes #233

Move the dependencies to a new crate ribir_app_winit and create abstractions to break up dependencies in the ribir_core code. There is still some cleanup necessary (naming, tests, ...) but I first wanted some feedback on whether this goes in the right direction before I put more work into it.

@zoechi Thanks! These are largely the points I expected; I think it's going the right way:

- add an application crate to control the windows
- provide core::Cursor and core::Key to abstract away the platform dependencies.

There are some points I'm concerned about:

- We export too much core stuff.
- All listeners moved to app_winit; these listeners are built-in widgets and should be part of the core.

From an overall perspective, we can make some adjustments:

We abstract events, layout, and draw logic in the core, so we provide a core window as the public struct to dispatch core events and draw frames. When we call draw_frame on the core window, it only returns the paint commands, which are abstracted from the render backend.

We create an app package, which helps the application manage the multi-windows, control the event loop, and mix the core window and the platform window, submitting the paint commands generated from the core window to a render backend. It also cannot depend on winit. It provides an abstract ShellWindow trait and an EventLoop trait. The ShellWindow should convert all platform stuff into what the core window needs, like events and cursor icons. The EventLoop provides convenient methods to control the application loop.

A winit shell window crate can be created, implementing ShellWindow and EventLoop. And it should not be hard to replace.

@M-Adoo Thanks for the feedback! While I dug a bit into the code to find some line along which to break up dependencies, I mostly did mechanical work to make the code compile after the changes.
I guess I'll need a while to process your comments and dig deeper into the code to better understand what this all means exactly. I'll come back with questions later.

> We export too much core stuff. All listeners moved to app_winit; these listeners are built-in widgets and should be part of the core.

Do you mean by this that the src/event_dispatcher.rs code should be moved back from app_winit to core, and instead the winit *Event stuff be mapped to core events?

I tried to visualize how I interpreted your comments, but it didn't work out too well. I'll post it here anyway:

```mermaid
---
title: Architecture
---
graph TD;
    subgraph ribir_core
        core::Event["core::Event (Window/Device/User)"]
        core::Cursor
        core::Key
        core::Layout
        core::Window
    end
    subgraph ribir_app
        app::ShellWindow --> core::Window
    end
    subgraph winit
        winit::Event["winit::Event (Window/Device/User)"]
        winit::Cursor
        winit::Key
        winit::Window
    end
    subgraph ribir_winit
        ribir_winit::FromEvent -..-> winit::Event
        ribir_winit::FromEvent -..-> core::Event
        ribir_winit::FromCursor -..-> winit::Cursor
        ribir_winit::FromCursor -..-> core::Cursor
        ribir_winit::FromKey -..-> winit::Key
        ribir_winit::FromKey -..-> core::Key
        ribir_winit::FromWindow -..-> winit::Window
        ribir_winit::FromWindow -..-> core::Window
    end
    subgraph ribir_gpu
        gpu::Backend
    end
    subgraph custom_app
        custom::App -..-> winit::Window
        custom::App -..-> gpu::Backend
        custom::App -..-> app::ShellWindow
    end
```

So the end result is that a custom_app project initializes a ribir_app::ShellWindow with ribir_winit and ribir_gpu.

What do you think about adding a ribir_geometry package that contains the euclid stuff from ribir_painter/lib? Currently size, position, ... are used from winit::dpi, but that is supposed to be removed from ribir_core.

> We export too much core stuff. All listeners moved to app_winit; these listeners are built-in widgets and should be part of the core.
> Do you mean by this that the src/event_dispatcher.rs code should be moved back from app_winit to core, and instead the winit *Event stuff be mapped to core events?

Yes.

> I tried to visualize how I interpreted your comments, but it didn't work out too well. I'll post it here anyway
> So the end result is that a custom_app project initializes a ribir_app::ShellWindow with ribir_winit and ribir_gpu

Just a little bit different: I don't think the user needs to implement a custom_app; that's what ribir_app does. ribir_app provides a feature config so the user can disable ribir_winit and use another implementation instead of ribir_winit.

> What do you think about adding a ribir_geometry package that contains the euclid stuff from ribir_painter/lib? Currently size, position, ... are used from winit::dpi, but that is supposed to be removed from ribir_core.

euclid looks fine to me, and size, position, ... are all described in a logical axis, so removing winit::dpi will not affect them. But ribir_winit should always convert from the winit axis to the core logical axis using winit::dpi.

> I don't think the user needs to implement a custom_app

This is just supposed to be the project Ribir is used in to build the GUI.

Thanks for your comments. I think all is clear now.
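To make the ShellWindow / EventLoop idea from the discussion concrete, here is a hypothetical Rust sketch. Only the trait names ShellWindow and EventLoop come from the thread; every type, field, and method signature below is invented for illustration and is not Ribir's actual API:

```rust
// Hypothetical sketch: ribir_core would only know these traits; a winit-backed
// crate (or any other shell) implements them.

/// Platform-agnostic cursor icon (stand-in for core::Cursor from the discussion).
#[derive(Debug, Clone, Copy, PartialEq)]
pub enum CursorIcon { Default, Pointer, Text }

/// Events already translated into core's own types (stand-in for core events).
#[derive(Debug, Clone, PartialEq)]
pub enum CoreEvent { Redraw, CursorMoved { x: f32, y: f32 } }

/// Converts platform stuff (events, cursor icons) into what the core window needs.
pub trait ShellWindow {
    fn set_cursor(&mut self, icon: CursorIcon);
    fn request_redraw(&mut self);
}

/// Drives the application loop; the core never sees winit's event loop directly.
pub trait EventLoop {
    fn run(&mut self, handler: &mut dyn FnMut(CoreEvent));
}

/// Mock shell used here instead of a real winit implementation.
pub struct MockShell { pub cursor: CursorIcon, pub redraws: u32 }

impl ShellWindow for MockShell {
    fn set_cursor(&mut self, icon: CursorIcon) { self.cursor = icon; }
    fn request_redraw(&mut self) { self.redraws += 1; }
}

/// Mock loop that just replays a queue of pre-translated core events.
pub struct MockLoop { pub queued: Vec<CoreEvent> }

impl EventLoop for MockLoop {
    fn run(&mut self, handler: &mut dyn FnMut(CoreEvent)) {
        for ev in self.queued.drain(..) { handler(ev); }
    }
}
```

The point of the mock impls is that the core-facing code can be exercised without winit at all, which is exactly the decoupling the PR is after.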
GITHUB_ARCHIVE
What security threats do the SA account and other known account names pose?

Does a known account name, like sa, pose a security threat to the database? When using Windows authentication on SQL Server, does it impose the same password policy (if it was set to, say, account lockout after 5 attempts)?

Can you improve your question? 1) Make the title a question. 2) Can you narrow down the scope of the question? Are you interested in brute-force attacks, or the vulnerabilities of known accounts? What area of security are you interested in?

I talk about this more in a book that I wrote that should be published in about a month. I put this separately as buying the book isn't part of the answer. http://www.amazon.com/Securing-SQL-Server-Protecting-Attackers/dp/1597496251/ref=sr_1_1?ie=UTF8&qid=1294447009&sr=8-1

@Mrdenny could you give us some useful quotes from the book? It may help to answer your question, and citing it as a source is quite acceptable :)

@Brian I'll have to check the contract to see if I can do that. I may have to paraphrase, but I'll see what I can do.

Does a known account name like sa pose a security threat to the database? A "god" user account with a known name is generally considered a worse idea than a god user with a less well-known name. It makes brute-force attacks that bit easier, as the attacker only has to guess the password and not the username and the password.

Also, having a god user at all can be dangerous. You are generally better off having specific users with specific rights for what they need to do. This sort of privilege-based security is easier to implement from scratch than it is to retrofit into your environment later. Disabling sa and giving specific users specific admin rights as needed in SQL Server is essentially the same recommendation as disabling root and handing out admin rights as needed via sudo under Linux and similar.
You can always re-enable sa once directly connected to the machine with adequate privileges, should anything go wrong and you end up dropping all the rights your users need to operate (and fix the issue), just the same as you can engineer root access to a Linux box if you have physical access to the box - so disabling the account is no magic bullet (but once an attacker has physical access to your machine, or full Administrative access via RDC or SSH, all bets are off anyway).

When using Windows authentication on SQL Server, does it impose the same password policy (if it was set to, say, account lockout after 5 attempts)? When using Windows Integrated Authentication, SQL Server has no control over account lockouts and such - it just maps a Windows user to an SQL user and asks the OS to vouch for the fact that the user has provided appropriate credentials. For interactive human users this means any lockout would occur as the user attempted to authenticate with Windows, not as they logged in to SQL Server.

I didn't see anyone else mention this, so I'll add it. With SQL Server 2005+, if your server is part of a domain and the domain has a password policy, you can enable the password policy to be enforced on SQL logins. This includes password complexity requirements and the ability to force password changes at login. Note that this can at times cause problems with some software installers that haven't been updated to work with SQL 2005+ and create SQL logins with insecure passwords.

It's not a bad idea to make it so that the default admin user (admin/root/postgres/sa/etc) doesn't actually exist in your system. You can always create a privileged account under a different name.
If nothing else, people trying to exploit your system don't have quite as easy a time as when they're working blind (e.g. SQL injection without an interactive shell or the ability to see direct output from their commands).

As for account lockouts: if someone's managed to get far enough to even be able to attempt to log into your machine, unless you specifically allow direct login from users, you've already lost the battle. Personally, I'm not in favor of lockouts for the most part, because they give someone the ability to create a denial of service if they manage to get the name of any of your users (and having them lock out the super user? not fun).

I'd recommend looking over the CIS Benchmarks ... they don't have them for every database, but they have recommendations for Oracle, MS SQL, DB2 and MySQL. If you're running something else, it's still worth looking over the general sorts of things they recommend.

There are two authentication modes used in SQL Server: Windows authentication and mixed mode (which enables both Windows authentication and SQL Server authentication). The first mode is less vulnerable to brute-force attacks, as the attacker is likely to run into a login lockout (the Account Lockout Policy feature) after a finite number of attempts. Every production environment using Windows Authentication mode should utilize the lockout policy feature, as it makes brute-force attacks impractical.

When it comes to SQL Server authentication and brute-force vulnerability, the situation is not so favorable. SQL Server Authentication has no features that allow detecting when the system is under a brute-force attack. Moreover, SQL Server is very responsive when it comes to validating SQL Server authentication credentials. It can easily handle repeated, aggressive brute-force login attempts without the overall performance degradation that might indicate such attacks.
This means that SQL Server Authentication is a perfect target for password cracking via brute-force attacks. Also, brute-force methods are evolving with each newly introduced encryption and password complexity method. For example, attackers that use rainbow tables (pre-computed tables for reversing the cryptographic hash values for every possible combination of characters) can easily and quickly crack any hashed password.

In order to protect your SQL Server from brute-force attacks, you should consider the following:

- Don't use SQL Server Authentication mode - force the attacker to hit the login lockout via Windows Authentication.
- In case you need to use SQL Server Authentication mode, disable or remove the SA login - that way the attacker must guess and pair both the user name and the password.

The sa account, when enabled, can do anything on the SQL Server. If an attacker were to get into this account, they could do anything on the SQL Server instance (and possibly the host OS) that they wanted.

The SA (and other well-known account names) are well-known points that hackers can attack. Some of the Oracle ones were poorly documented and thus the default passwords were not always changed. Once you've got control of the SA account in SQL Server, you control the server it is running on and can run any code or install anything you wish. In my more cowboy days, I remember not being allowed (it needed paperwork I wasn't going to fill out) to install an ActiveX control on a webserver that was also hosting the SQL Server - so I used xp_cmdshell to copy and install the control. The default Oracle SYS password is change_on_install, and you would be surprised how many people don't!
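Pulling the advice in these answers together, a hardening sketch in T-SQL might look like this. The login name and password are placeholders, and the ALTER SERVER ROLE syntax assumes SQL Server 2012 or later; older versions use sp_addsrvrolemember instead:

```sql
-- Sketch: disable and rename sa, then grant admin rights to a named login
-- that is subject to the domain password policy (SQL Server 2005+).
ALTER LOGIN sa DISABLE;
ALTER LOGIN sa WITH NAME = [nonobvious_admin_disabled];  -- placeholder name

CREATE LOGIN [ops_dba]                 -- placeholder login name
    WITH PASSWORD = N'<strong password here>',
    CHECK_POLICY = ON,       -- enforce the Windows/domain password policy
    CHECK_EXPIRATION = ON;   -- force periodic password changes

ALTER SERVER ROLE sysadmin ADD MEMBER [ops_dba];   -- SQL Server 2012+ syntax
-- On SQL Server 2005/2008: EXEC sp_addsrvrolemember 'ops_dba', 'sysadmin';
```

Even disabled, renaming sa means an attacker has to guess both halves of the credentials, which is the point made above about pairing user name and password.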
STACK_EXCHANGE
Find the best option to minimize the lag of libVLC Android app java

Viewing a rtsp stream on my libVLC, I get 1 s of delay. I tried to use all the possible options I could find to set in libVLC to reduce the delay.

I have a rtsp server on my host machine that streams in RTSP with the following command:

/v4l2rtspserver/v4l2rtspserver -W 320 -H 240 -F 30 -P 8554 $mycam

that outputs:

[NOTICE] /v4l2rtspserver/main.cpp:269 Version: 0.3.6-3-g233b631 live555 version:2022.10.01
[NOTICE] /v4l2rtspserver/src/V4l2RTSPServer.cpp:37 Create V4L2 Source.../dev/video0
[NOTICE] /v4l2rtspserver/libv4l2cpp/src/V4l2Device.cpp:133 driver:uvcvideo capabilities:84200001 mandatory:4000001
[NOTICE] /v4l2rtspserver/libv4l2cpp/src/V4l2Device.cpp:136 /dev/video0 support capture
[NOTICE] /v4l2rtspserver/libv4l2cpp/src/V4l2Device.cpp:139 /dev/video0 support streaming
[ERROR] /v4l2rtspserver/libv4l2cpp/src/V4l2Device.cpp:212 /dev/video0: Cannot set pixelformat to:H264 format is:YUYV
[NOTICE] /v4l2rtspserver/libv4l2cpp/src/V4l2Device.cpp:225 /dev/video0:MJPG size:320x240 bufferSize:153600
[NOTICE] /v4l2rtspserver/libv4l2cpp/src/V4l2Device.cpp:246 fps:1/30
[NOTICE] /v4l2rtspserver/libv4l2cpp/src/V4l2Device.cpp:247 nbBuffer:0
[NOTICE] /v4l2rtspserver/libv4l2cpp/src/V4l2MmapDevice.cpp:49 Device /dev/video0
[NOTICE] /v4l2rtspserver/libv4l2cpp/src/V4l2MmapDevice.cpp:73 Device /dev/video0 nb buffer:10
[NOTICE] /v4l2rtspserver/src/V4l2RTSPServer.cpp:62 Create Source .../dev/video0
[NOTICE] /v4l2rtspserver/inc/BaseServerMediaSubsession.h:49 format:video/JPEG
[NOTICE] /v4l2rtspserver/inc/V4l2RTSPServer.h:234 Play this stream using the URL "rtsp://XXXXXXX:8554/unicast"
[NOTICE] /v4l2rtspserver/src/V4L2DeviceSource.cpp:96 handleCmd_SETUP:SETUP rtsp://<IP_ADDRESS>:8554/unicast/track1 RTSP/1.0
CSeq: 4
User-Agent: LibVLC/4.0.0-dev (LIVE555 Streaming Media v2022.07.14)
Transport: RTP/AVP;unicast;client_port=57666-57667

String rtspStreamUrl;
Resources res = getResources();
rtspStreamUrl =
    preferences.getString("rtspStreamUrl", getResources().getString(R.string.Default_RtspStreamUrl));

///
ArrayList<String> options = new ArrayList<>();
options.add("-vvv");
options.add("--low-delay");
options.add("--network-caching=100");
//options.add("--file-caching=100");
//options.add("--sub-track=0");
//options.add("--rtsp-tcp");
LibVLC libVLC = new LibVLC(this, options);

// VLC's MediaPlayer
org.videolan.libvlc.MediaPlayer vlcMediaPlayer = new org.videolan.libvlc.MediaPlayer(libVLC);

SurfaceView surfaceView = findViewById(R.id.surface_view);
FrameLayout.LayoutParams params = new FrameLayout.LayoutParams(FrameLayout.LayoutParams.MATCH_PARENT, FrameLayout.LayoutParams.MATCH_PARENT);
surfaceView.setLayoutParams(params);
vlcMediaPlayer.getVLCVout().setVideoView(surfaceView);
vlcMediaPlayer.getVLCVout().attachViews();

Media media = new Media(libVLC, Uri.parse(rtspStreamUrl));

///
media.setHWDecoderEnabled(true, true);
media.addOption(":network-caching=250"); // golden was 250
media.addOption(":clock-jitter=0"); // golden was 0
media.addOption(":clock-synchro=0");
media.addOption(":no-dr");
media.addOption(":drop-late-frames");
media.addOption(":skip-frames");
media.addOption(":live-caching=50"); // for example, to set it to 50ms
///

vlcMediaPlayer.setMedia(media);
media.release();

DisplayMetrics metrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getMetrics(metrics);
int screenHeight = metrics.heightPixels;
int screenWidth = metrics.widthPixels;
vlcMediaPlayer.getVLCVout().setWindowSize(screenWidth, screenHeight); // This sets the dimensions VLC will use
vlcMediaPlayer.setAspectRatio("16:9"); // or whatever aspect ratio you want, e.g., "4:3"

vlcMediaPlayer.play();

I have this version of the lib:

implementation 'org.videolan.android:libvlc-all:4.0.0-eap11'

Any brilliant idea on how to debug this, or what to change to reduce the delay? I tried to visualize the stream on a separate client on the same device (QGroundControl) and I have no visible delay.
EDIT: It turns out that removing the -F frame rate option from the server decreases the delay, but it is still pretty big, around 600 ms.

On an unrelated note, where did you find the addOption parameters? I am trying to find more options in the documentation but I can't seem to find them. If you can, please provide the link?

I have noticed that the 3rd version of VLC works so much better than the 4th. You can try installing different versions and comparing.
STACK_EXCHANGE
I tried the following regedit key combinations without luck:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management

When I try to install the Adobe update, it appears that the program has become the active process in the task manager. I'm trying to get the Adobe update installed.

I've solved the issue by upgrading to the latest Windows 10 update. It's not advisable to use the 32-bit version because of the compatibility issues.

The present invention relates generally to thermoelectric converters, and more specifically to a thermoelectric module which allows for high temperature differentials between the hot and cold sides of a thermoelectric module. Thermoelectric converters are devices which utilize the coupling of temperature differences to produce electric power by a heat-to-electricity process and the reverse process. A solid state thermoelectric device consists of a thermal gradient between the hot and cold sides of the solid state material. The thermoelectric material couples the thermal gradient to electric potential via a Peltier effect. Current designs of thermoelectric generators and thermoelectric modules limit the ability of the thermoelectric material to perform effectively at higher temperatures, i.e., between about 650 and 800 °C, and also have a reduced efficiency at high temperatures. The efficiency of the device is defined as the ratio of the generated electric power to the power input to the device. In addition to efficiency considerations, there is another critical consideration in the design of a thermoelectric module: the temperature differential between the hot and cold sides of the module must be as high as possible. This would help to minimize the size and weight of the apparatus to which the thermoelectric module is connected. One type of thermal coupling mechanism includes a solid ceramic block, which is heated on one end and has a cold surface on the other end.
However, the ceramic block blocks light and emits heat over a very broad spectrum of wavelengths. Another type of thermal coupling mechanism which has been used to couple thermal gradients includes a column of small bismuth telluride pellets. This type of thermal coupling relies on a solid phase change effect in a mass of materials, e.g., bismuth telluride.
OPCFW_CODE
How do I upgrade to Office 2016?

Newer versions of Office
- Open any Office app, such as Word, and create a new document.
- Go to File > Account (or Office Account if you opened Outlook).
- Under Product Information, choose Update Options > Update Now.
- Close the "You're up to date!" window after Office is done checking for and installing updates.

How can I upgrade my Office 2016 to 2019 for free?
- The first thing you need to do is install the current version of Word (1809). You can do this by running the update from within Word (the simplest way).
- It is not clear which version you have installed.
- The Office subscription versions already include the features available in Office 2019.

Can I upgrade my Office 2010 to 2016?

How to Upgrade to Office 2016
- Sign into your Microsoft account from the My Account page.
- Click on Install and then Install again on the next screen.
- Click on the setup file to run it and the installer will upgrade your version of Office to Office 2016.

Is Microsoft Office 2016 free?

Office.com provides completely free, but slightly limited, online-only versions of Word, Excel, PowerPoint, Outlook and other tools. Around since 2010, the website has largely flown under the radar, overshadowed by the desktop versions of Office. All you need to use it is a free Microsoft account, which you get here.

Do you need to uninstall Office 2016 before installing Office 2019?

Office 2019, AFAIK, will not run on the same computer as an Office 365 subscription. Unless you have a good reason to keep Office 2016 on your computer, I recommend that you do uninstall it first.

What's the difference between Office 2016 and 2019?

Office 2019 does offer some of the new features incorporated into Office 365 since the release of Office 2016. This includes features like the following: improved inking in all the Office apps, and a PowerPoint Morph transition that lets you create the appearance of movement between similar slides.

How do you enable updates in Office?
Open any Office app, such as Word, and create a new document. Go to File > Account (or Office Account if you opened Outlook). Under Product Information, choose Update Options > Update Now. Note: You may need to click Enable Updates first if you don't see the Update Now option right away.

How do I enable Office updates?

Open any Office app from your system, go to the File menu, then select Account or Office Account. Then select the option for Office Updates. It is enabled by default. If you do not want to receive any new updates, click on it and a drop-down menu will appear.

How can I update my Microsoft Office version?

Open Windows Update by choosing Start > Settings > Update and security. Choose Advanced options. Under Choose how updates are installed, choose the options that you want, including checking the "Give me updates for other Microsoft products when I update Windows" box so you can get Office updates.

How do I install Microsoft Office Home and Student?

Press the Windows key + C, or swipe in from the right edge of the screen to open your Charms, then select Search. Type Microsoft Office in the search. Click Microsoft Office to launch the application. Select Activate. Enter your product key as found on the MPI card. Click Continue to start installation of Office Home & Student.
OPCFW_CODE
Writing ArcGIS Python Script Tool?

I currently have a Python script which will replace the data sources on an MXD with broken links. The code seems to work fine, as I have run it a handful of times using relative paths to indicate the location of the MXD and the location of the GDB. My problem begins when I change those paths to "GetParameterAsText". Once I do that and turn it into a script tool, it crashes.

Here is the code that works; it's a bit messy, but it works:

import os
import logging

logging.basicConfig(level=logging.DEBUG)
logging.debug("importing arcpy")
import arcpy
logging.debug("arcpy loaded")

arcpy.env.workspace = "Z:\Work\rd\Fortier\World_Index_Test.gdb"

# Add layer to existing MXD with a dataframe named Layer.
logging.debug("opening map document")
mxd = arcpy.mapping.MapDocument(r"Z:\Work\rd\Fortier\WorldIndex\World_Index\Templates\World_Index_Template2.mxd")
logging.debug("map document open")

logging.debug("accessing data frame")
df = arcpy.mapping.ListDataFrames(mxd)[0]
logging.debug("Found dataframe: {}".format(df.name))

# Check to see if layers exist within GDB and change data source.
for lyr in arcpy.mapping.ListLayers(mxd, "", df):
    if lyr.isFeatureLayer:
        name = lyr.datasetName
        path = r"Z:\Work\rd\Fortier\World_Index_Test.gdb"
        if arcpy.Exists(os.path.join(path, name)):
            lyr.replaceDataSource(path, "FILEGDB_WORKSPACE", name)
            logging.info("replaced data source on {}".format(lyr.name))
        else:
            print "skipped because no source"
    else:
        #arcpy.mapping.RemoveLayer(df, lyr)
        print "skipped group"

# Saves copy of new MXD and deletes original mxd
logging.debug("Saving new mxd")
mxd.saveACopy(r"Z:\Work\rd\Fortier\WorldIndex\World_Index\Outputs\Test1.mxd")
del mxd
#addLayer

Now when I attempt to change it into a script tool, it crashes:

import os
import logging

logging.basicConfig(level=logging.DEBUG)
logging.debug("importing arcpy")
import arcpy
logging.debug("arcpy loaded")

#arcpy.env.workspace = "Z:\Work\rd\Fortier\World_Index_Test.gdb"

# Add layer to existing MXD with a dataframe named Layer.
logging.debug("opening map document")
mxd = arcpy.GetParameterAsText(0)
logging.debug("map document open")

logging.debug("accessing data frame")
df = arcpy.mapping.ListDataFrames(mxd)
logging.debug("Found dataframe: {}".format(df.name))

# Check to see if layers exist within GDB and change data source.
for lyr in arcpy.mapping.ListLayers(mxd, "", df):
    if lyr.isFeatureLayer:
        name = lyr.datasetName
        path = arcpy.GetParameterAsText(1)
        if arcpy.Exists(os.path.join(path, name)):
            lyr.replaceDataSource(path, "FILEGDB_WORKSPACE", name)
            logging.info("replaced data source on {}".format(lyr.name))
        else:
            print "skipped because no source"
    else:
        #arcpy.mapping.RemoveLayer(df, lyr)
        print "skipped group"

# Saves copy of new MXD and deletes original mxd
logging.debug("Saving new mxd")
mxd.saveACopy = arcpy.GetParameterAsText(2)
del mxd
#addLayer

Here is the error I am getting:

Start Time: Tue Dec 30 05:55:36 2014
Running script CreateMXD...
Failed script CreateMXD...
Traceback (most recent call last):
  File "D:\World_Index\WorldIndex\World_Index\Scripts\CreateMXD.py", line 29, in
    df = arcpy.mapping.ListDataFrames (mxd)
  File "c:\program files (x86)\arcgis\desktop10.2\arcpy\arcpy\utils.py", line 181, in fn_
    return fn(*args, **kw)
  File "c:\program files (x86)\arcgis\desktop10.2\arcpy\arcpy\mapping.py", line 1479, in ListDataFrames
    result = mixins.MapDocumentMixin(map_document).listDataFrames(wildcard)
  File "c:\program files (x86)\arcgis\desktop10.2\arcpy\arcpy\arcobjects\mixins.py", line 728, in listDataFrames
    return list(reversed(list(self.dataFrames)))
  File "c:\program files (x86)\arcgis\desktop10.2\arcpy\arcpy\arcobjects\mixins.py", line 695, in dataFrames
    return map(convertArcObjectToPythonObject, self.pageLayout.dataFrames)
  File "c:\program files (x86)\arcgis\desktop10.2\arcpy\arcpy\arcobjects\mixins.py", line 679, in pageLayout
    return convertArcObjectToPythonObject(self._mxd._arc_object.pageLayout)
AttributeError: 'unicode' object has no attribute '_arc_object'

Failed to execute (CreateMXD).
Failed at Tue Dec 30 05:55:37 2014 (Elapsed Time: 1.41 seconds)

I can see a couple of things you're missing in your script tool. You need to change the mxd parameter string (the unicode object mentioned in your traceback) to a map document object:

mxd = arcpy.GetParameterAsText(0)
mxd = arcpy.mapping.MapDocument(mxd)

...or more simply:

mxd = arcpy.mapping.MapDocument(arcpy.GetParameterAsText(0))

To avoid a subsequent error you'll also need to change this:

df = arcpy.mapping.ListDataFrames(mxd)

...back to this:

df = arcpy.mapping.ListDataFrames(mxd)[0]

...to get the first data frame in the data frame list.

Thank you Brad. The script tool ran with no errors after making the suggested corrections. However, the MXD I am saving did not appear in the location I saved it to. In fact, it did not appear anywhere. In the script tool I am saving a new MXD. I have it set as an ArcMap Mapping Document and as an output parameter.
I have tried saving it as "Test" and "Test.mxd", but it does not seem to exist. Any ideas?

The output MXD path would actually be an input for the script, so try setting it to an input parameter. Also be sure to use the full path to where you want the MXD stored so you know exactly where it ends up.

I tried setting the parameter as an input. The problem is that when I tried to save the new MXD, it came back with an error saying that it did not exist, so I chose to use an existing MXD; however, it did not seem to work. It boggles my mind that with relative paths this script works, but trying to turn it into a script tool gives me nothing but issues.
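One more detail worth checking (an observation, not from the thread itself): in the script-tool version the save line is written as an assignment, `mxd.saveACopy = arcpy.GetParameterAsText(2)`, rather than a call. In Python, assigning to a method name merely rebinds the attribute and never executes the save, which would explain an output MXD that never appears anywhere. A minimal arcpy-free stand-in class (hypothetical, for illustration only) shows the difference:

```python
class FakeMapDocument:
    """Toy stand-in for arcpy.mapping.MapDocument, for illustration only."""
    def __init__(self):
        self.saved_to = None

    def saveACopy(self, path):
        # In real arcpy this writes a new .mxd to disk.
        self.saved_to = path

# Assignment (the bug): nothing is saved; the method is silently replaced by a string.
mxd_bug = FakeMapDocument()
mxd_bug.saveACopy = "Z:/out/Test1.mxd"
print(mxd_bug.saved_to)  # None

# Call (the fix): the path is actually handed to the save routine.
mxd_fix = FakeMapDocument()
mxd_fix.saveACopy("Z:/out/Test1.mxd")
print(mxd_fix.saved_to)  # Z:/out/Test1.mxd
```

So in the script tool, `mxd.saveACopy(arcpy.GetParameterAsText(2))` would be the form to try.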
STACK_EXCHANGE
using Dapper;
using Microsoft.AspNetCore.Http;
using Microsoft.Data.SqlClient;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Caching.Memory;
using Quilti.DAL;
using Quilti.Dtos;
using Quilti.Models;
using System;
using System.Collections.Generic;
using System.Data;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

namespace Quilti.Managers
{
    public static class PatchManager
    {
        // We start from the origin patch (0x0), iterating out a ring of patches at a time until we find an empty patch.
        // We return the patch which neighbors it as our starting seed patch for the front end.
        public static Patch GetNextAvailablePatch(QuiltiContext context, IMemoryCache cache)
        {
            // Pull out the last ringNumber used (or fall back to starting at 0)
            var cacheKey = "lastRingNumber";
            cache.TryGetValue(cacheKey, out int ringNumber);

            Patch returnPatch = null;
            bool lastPatchFound = false;
            while (!lastPatchFound)
            {
                var currentRingPatches = context.Patches
                    .Where(p =>
                        (Math.Abs(p.X) == ringNumber || Math.Abs(p.Y) == ringNumber)
                        && (Math.Abs(p.X) <= ringNumber && Math.Abs(p.Y) <= ringNumber))
                    .ToList();

                // Each ring has more patches in it than the one before; do the math on how many patches this particular ring should have.
                // 0 - 1
                // 1 - (3x3) - (1x1)
                // 2 - (5x5) - (3x3)
                // n - (2*n+1)^2 - (2*n-1)^2
                var fullRingCount = (ringNumber == 0) ? 1 : Math.Pow(2 * ringNumber + 1, 2) - Math.Pow(2 * ringNumber - 1, 2);
                if (currentRingPatches.Count == fullRingCount)
                {
                    // Our current ring is full; set the returnPatch now in case the next ring is empty
                    returnPatch = currentRingPatches.First();
                    ringNumber++;
                }
                else if (currentRingPatches.Count == 0)
                {
                    // Our current ring is empty; just get out now and we'll return with the first patch from the previous ring
                    lastPatchFound = true;
                }
                else
                {
                    // Generate a list of all the potential patch Ids (filled or not) that exist in this ring.
                    // It's important we generate these in this particular order so that any given item is a geometric neighbor
                    // to the one before and after.
                    var potentialPatchIds = new List<string>();
                    var ringRange = Enumerable.Range(ringNumber * -1, ringNumber * 2 + 1);
                    // Top row
                    foreach (var i in ringRange)
                    {
                        var j = ringRange.Last();
                        potentialPatchIds.Add(i + "x" + j);
                    }
                    // Right column
                    foreach (var j in ringRange.Reverse())
                    {
                        var i = ringRange.Last();
                        potentialPatchIds.Add(i + "x" + j);
                    }
                    // Bottom row
                    foreach (var i in ringRange.Reverse())
                    {
                        var j = ringRange.First();
                        potentialPatchIds.Add(i + "x" + j);
                    }
                    // Left column
                    foreach (var j in ringRange)
                    {
                        var i = ringRange.First();
                        potentialPatchIds.Add(i + "x" + j);
                    }
                    potentialPatchIds = potentialPatchIds.Distinct().ToList();

                    // Iterate through the potential patch Ids and find the missing one, returning the patch before it,
                    // making sure we've encountered at least one Patch in this ring first so we're not returning anything
                    // on the previous ring.
                    var haveEncounteredPatch = false;
                    foreach (var patchId in potentialPatchIds)
                    {
                        var nextInList = currentRingPatches.FirstOrDefault(p => p.PatchId == patchId);
                        if (nextInList == null && haveEncounteredPatch)
                        {
                            break;
                        }
                        else if (nextInList != null)
                        {
                            returnPatch = nextInList;
                            haveEncounteredPatch = true;
                        }
                    }
                    lastPatchFound = true;
                }
            }
            cache.Set(cacheKey, ringNumber, TimeSpan.FromDays(7));
            return returnPatch;
        }

        public static async Task DeletePatch(QuiltiContext context, IMemoryCache cache, string patchId)
        {
            var patchImage = context.PatchImages.FirstOrDefault(p => p.PatchId == patchId);
            context.PatchImages.Remove(patchImage);
            var patch = context.Patches.FirstOrDefault(p => p.PatchId == patchId);
            context.Patches.Remove(patch);
            await context.SaveChangesAsync();
            cache.Remove($"Patch_{patch.PatchId}");
        }

        public static Patch GetPatch(QuiltiContext context, IMemoryCache cache, string patchId)
        {
            var patch = cache.GetOrCreate($"Patch_{patchId}", entry =>
            {
                entry.SlidingExpiration = TimeSpan.FromHours(1);
                return context.Patches.AsNoTracking().First(p => p.PatchId == patchId);
            });
            return patch;
        }

        public static bool PatchExists(QuiltiContext context, string patchId)
        {
            var patch = context.Patches.FirstOrDefault(p => p.PatchId == patchId);
            return patch != null;
        }

        public static async Task<string> ReservePatch(QuiltiContext context, string creatorIp, string patchId)
        {
            var coordinates = Helper.PatchIdToCoordinates(patchId);
            var newPatch = new Patch()
            {
                PatchId = patchId,
                X = coordinates.X,
                Y = coordinates.Y,
                CreatorIp = creatorIp,
                ObjectStatus = ObjectStatus.Reserved
            };
            context.Patches.Add(newPatch);
            await context.SaveChangesAsync();
            return newPatch.PatchId;
        }

        public static List<string> GetPatchIdsInRange(QuiltiContext context, int leftX, int rightX, int topY, int bottomY)
        {
            var patches = context.Patches
                .Where(p => p.X >= leftX && p.X <= rightX && p.Y >= bottomY && p.Y <= topY)
                .Select(p => p.PatchId)
                .ToList();
            return patches;
        }

        public static async Task<string> CompletePatch(QuiltiContext context, IMemoryCache cache, string patchId, string imageMini, string image)
        {
            var patch = context.Patches.First(x => x.PatchId == patchId);

            // Patch Image
            var patchImage = new PatchImage()
            {
                PatchId = patch.PatchId,
                Image = image
            };
            context.PatchImages.Add(patchImage);

            // Patch
            patch.ImageMini = imageMini;
            patch.ObjectStatus = ObjectStatus.Active;
            await context.SaveChangesAsync();

            // Return
            cache.Remove($"Patch_{patch.PatchId}");
            return patch.PatchId;
        }

        public static async Task ClearOutOldReservedPatches(QuiltiContext context, IMemoryCache cache)
        {
            var oneHourAgo = DateTimeOffset.Now.AddHours(-1);
            var patchesToClear = context.Patches.Where(p => p.ObjectStatus == ObjectStatus.Reserved && p.LastModifiedDate < oneHourAgo).ToList();
            context.Patches.RemoveRange(patchesToClear);
            await context.SaveChangesAsync();
            foreach (var patch in patchesToClear)
            {
                cache.Remove($"Patch_{patch.PatchId}");
            }
        }

        public static bool PatchMatchesCreator(QuiltiContext context, IMemoryCache cache, string patchId, string creatorIp)
        {
            var patch = GetPatch(context, cache, patchId);
            return patch.CreatorIp == creatorIp;
        }
    }
}
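The ring arithmetic in GetNextAvailablePatch can be sanity-checked independently. This is a small Python sketch (not part of the original C#) of the same two ideas: a full ring n holds (2n+1)² − (2n−1)² = 8n patches, and walking the perimeter top-row → right-column → bottom-row → left-column visits each corner twice, so de-duplication is needed:

```python
def full_ring_count(n):
    # Ring 0 is just the origin; ring n is the (2n+1)x(2n+1) square minus its
    # (2n-1)x(2n-1) interior, which simplifies to 8n cells.
    return 1 if n == 0 else (2 * n + 1) ** 2 - (2 * n - 1) ** 2

def ring_patch_ids(n):
    # Mirror of the C# top-row / right-column / bottom-row / left-column walk.
    rng = list(range(-n, n + 1))
    ids = []
    ids += ["%dx%d" % (i, rng[-1]) for i in rng]            # top row
    ids += ["%dx%d" % (rng[-1], j) for j in reversed(rng)]  # right column
    ids += ["%dx%d" % (i, rng[0]) for i in reversed(rng)]   # bottom row
    ids += ["%dx%d" % (rng[0], j) for j in rng]             # left column
    # Corners are visited twice, so de-duplicate while preserving order
    # (the order matters: consecutive ids are geometric neighbors).
    return list(dict.fromkeys(ids))

for n in range(1, 6):
    assert len(ring_patch_ids(n)) == full_ring_count(n) == 8 * n
```

The assertion confirms that the Distinct() call in the C# is what reconciles the perimeter walk with the (2n+1)² − (2n−1)² count.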
STACK_EDU
Availability of createUseQueryOptions and createUseInfiniteQueryOptions in v1

Is the absence of createUseQueryOptions and createUseInfiniteQueryOptions in v1 intentional? These methods appear to be extremely important when React context is not available.

https://github.com/connectrpc/connect-query-es/blob/main/packages/connect-query/src/index.ts#L19-L24

The README mentions createUseQueryOptions:

What if I have a custom Transport? If the Transport attached to React Context via the TransportProvider isn't working for you, then you can override transport at every level. For example, you can pass a custom transport directly to the lowest-level API like useQuery or createUseQueryOptions.

Does this only work with React? Connect-Query does require React, but the core (createUseQueryOptions) is not React specific, so splitting off a connect-solid-query is possible.

(README link: https://github.com/connectrpc/connect-query-es?tab=readme-ov-file#does-this-only-work-with-react)

Is there an alternative method now, or was their removal unintentional? If it's unintentional, I'm willing to help with a PR.

This was an intentional change to reduce the API surface and create a more focused API. That said, I would be open to exposing these APIs again given a specific use case that can't be served otherwise. We did expose a helper, callUnaryMethod, which can often be used similarly, just without being specific to infiniteQuery or query.

I guess I also forgot to mention createConnectQueryKey(), which provides the queryKey.

Thank you for your response. I understand now that it was an intentional change. First, we will try using callUnaryMethod in our product and see if there are any issues. For now, I will close this issue, and if anything comes up, we will create a new one!

I think it would be nice if these were exposed.
They are useful for building custom hooks but also for some APIs such as queryClient.ensureQueryData:

await queryClient.ensureQueryData(
  createUseQueryOptions(rpc, { input }, { transport })
);

While I could implement this with the unary helpers, it ends up being the same code over and over again, and the code is basically a copy-paste of the internal API.

In our codebase, this kind of pattern arose too, but we ended up going down the path of a custom QueryClient, which provided typesafe variants of the APIs, so invalidateQueries became invalidateConnectQueries with appropriate args. Ditto with setConnectQueryData. It's still a little early within our codebase to publish it, but I personally think it provides a better DX and could even eliminate the need for (but keep an option for) the transport.

Obviously one does not negate the other, but in the spirit of reducing the number of ways to do things, the custom client feels more directed and adds more value than just removing helper code. If that would be of interest to you, I can open a tracking issue for that work and we can get a little feedback from the ecosystem in a slightly more visible way.

I'm in the situation that I want to call "context.queryClient.ensureQueryData" (in TanStack Router's loader, as described here: https://tanstack.com/router/latest/docs/framework/react/guide/external-data-loading#a-more-realistic-example-using-tanstack-query). But I'm a bit confused as to what the recommendation is right now. Do I extend the query client? I'm not sure how to do this; is there an example? I've tried to copy the createUseQueryOptions from the codebase, but it is quite a lot it seems, or maybe I'm doing it wrong as well. Imo it's probably pretty common to use TanStack Query in router data "loaders" where the React context is not available.
So the way we are creating our own custom query client is like so:

import { QueryClient as TSQueryClient } from "@tanstack/react-query";
import {
  callUnaryMethod,
  createConnectQueryKey,
  defaultOptions,
  DisableQuery,
  disableQuery,
  TransportProvider,
  useInfiniteQuery,
  useMutation,
  createProtobufSafeUpdater,
  useTransport,
  ConnectQueryKey
} from "@connectrpc/connect-query";
import { CallOptions, ConnectError, Transport } from "@connectrpc/connect";
import {
  AnyMessage,
  Message,
  MethodInfoUnary,
  PartialMessage,
  ServiceType,
} from "@bufbuild/protobuf";

export class QueryClient extends TSQueryClient {
  invalidateConnectQueries<I extends Message<I>, O extends Message<O>>(
    methodSig: MethodSignature<I, O>,
    input?: PartialMessage<I>,
    options?: InvalidateOptions
  ) {
    return this.invalidateQueries(
      {
        queryKey: createConnectQueryKey(methodSig, input),
      },
      options
    );
  }

  setConnectQueryData<I extends Message<I>, O extends Message<O>>(
    methodSig: MethodSignature<I, O>,
    updater: ConnectUpdater<O>,
    input?: typeof disableQuery | PartialMessage<I> | undefined,
    options?: SetDataOptions | undefined
  ) {
    return this.setQueryData(
      createConnectQueryKey(methodSig, input),
      createProtobufSafeUpdater(methodSig, updater),
      options
    );
  }
}

As for the eslint rule @tanstack/query/exhaustive-deps, it's a valid point. Currently we don't use the transport as part of the key, which means that if the transport changes then related data doesn't auto-fetch again. I'm not actually sure how we'd want to solve that one, since the transport itself can't really be serialized into a string (it's a set of functions) and thus cannot be part of the query key.

Ah right, the query client is just a class. That makes sense, thank you.

For the cache key not including the transport: am I correct that it only means that the cache key might overlap if a project uses different transports with the same method signatures in the same project?

Yes, that is correct. I'm not sure how realistic that scenario is, but it is possible.
Fixed in https://github.com/connectrpc/connect-query-es/releases/tag/v1.3.0
GITHUB_ARCHIVE
2023 Oct 7
I gave a public talk about black holes at the NTU alumni association.

2023 June 14
The international DESI (Dark Energy Spectroscopic Instrument) survey, in which my research team participates, publicly released nearly 2 million astronomical spectra for the first time.
Press release: First episode of a new cosmic exploration! Astronomers from National Taiwan University and Tsinghua University participate in the Dark Energy Spectroscopic Instrument Project; nearly 2 million new astronomical spectra were published. [連結]

2023 May 19-21
My group members participated in the 2023 annual ASROC meeting and contributed two posters and three oral presentations, with topics covering mysterious molecules by Chih-Yuan, a 3D gas map of the Milky Way by Bo-An, the metallicity of dwarf galaxies by Yu Voon, the CGM and radio-mode feedback by Yu-Ling, and AGN accretion disks by Ji-Jia!

2022 Oct 15
I gave an outreach talk to high school students about how astrophysicists probe the origin and the expansion of the Universe!

2022 August 18
A series of DESI papers about how the DESI survey was designed, operated, and validated is out today! I am fortunate to lead one of the papers and contribute to the DESI project. (2023 Jan 26 update: the galaxy VI paper is now published in ApJ.)
An HST archival project --- A ULLYSES Survey of the Magellanic Clouds: a Laboratory for the Physics of Interfaces between Hot and Cold Gas --- that I participate in as a Co-Investigator is approved!

2022 June 21
Chih-Yuan got the MOST undergraduate research funding (科技部大專學生研究計畫) for his project, Searching for Diffuse Interstellar Bands Outside Galaxies. Congratulations, Chih-Yuan!

2022 Jan 26
My MOST proposal "Unveiling the Physical Processes Driving Galaxy Evolution with the Dark Energy Spectroscopic Instrument Survey" (運用DESI計畫大數據探究星系演化的物理機制) is approved! Thank you for your support, MOST.

2022 Jan 15
My joint proposal with the NTU high-energy group, "Synergies of Astrophysics and Particle Physics" (天文與粒子物理的協同研究), for the NTU core consortium program is approved!
Thank you for having me, NTU high-energy group, and for your support, NTU!

2022 Jan 13
DESI has already collected 7.5 million spectra of galaxies --- more than all the previous spectroscopic surveys combined!
A slice through the 3D map of galaxies from the first few months of the DESI survey. The Earth is at the center, with the furthest galaxies plotted at distances of 10 billion light-years. Each point represents one galaxy. This version of the DESI map shows a subset of 400,000 of the 35 million galaxies that will be in the final map. Image credit: D. Schlegel / DESI / Lawrence Berkeley National Laboratory / M. Zamani, NSF's NOIRLab.

2021 July 24
I was awarded a Yushan Young Scholar grant (2021-2026) by the Ministry of Education in Taiwan.

2021 July 2
I wrote an article about my academic experience and some suggestions for those who would like to pursue a career in academia, especially in astronomy.

2021 May 17
DESI starts its main survey operation TODAY!
3D visualization: https://data.desi.lbl.gov/public/epo/desi3d/
DESI will collect more than 30,000,000 spectra of stars, galaxies, and quasars and map the 3D distribution of matter in the Universe. I cannot wait to work on such an amazing dataset! Over the past year, I have been leading the effort of building truth redshift tables of galaxies from the survey validation dataset and using the tables to better understand the target selection, redshift completeness, and the performance of the pipeline. I am really happy to be part of this survey.

2021 March 11
I am doing a remote observation with the Harlan J. Smith 2.7 m Telescope at the McDonald Observatory.

2021 Jan 29
My giant radio galaxy paper was accepted by MNRAS --- my 10th first-author paper! This work was done in collaboration with Prof. Xavier Prochaska at UCSC.
My Arduino Line Following Robot !!! - With Pololu QTR-6A IR Reflectance Sensor Array

I used an Arduino Duemilanove with the ATmega328. Propulsion is provided by two Parallax Futaba continuous rotation servos. My sensor is the Pololu QTR-6A IR reflectance sensor array, and it is all powered off 4 rechargeable NiMH Duracell AA batteries :) It can follow a dark line on a light background; in this case I used black tape on a whiteboard. It first calibrates itself for 5 seconds. You move it across the line a few times so it gets used to the difference in reflectance. After the calibration it begins moving forward. I used an algorithm to determine its error off the line. If it determines through the algorithm that it is at an extreme error, it will turn for a longer amount of time. Similarly, if the robot determines it is only a fraction of an inch off the line, it will only turn for a fraction of a second. This reduces overcompensation and makes the line following a little smoother and more reliable. This is the code I used; I started it from scratch and added the MegaServo library. I'm aware that there is a library for the Pololu IR sensor arrays, but I encountered problems, so I decided to start from scratch with the sensor reading as well. I'm using the analog version of the Pololu sensor array, as opposed to the RC version, which outputs a digital signal. My sensors output an analog voltage based on the reflectance of the surface. For example, if you are providing 5V to the sensors at Vcc and a sensor encounters a dark surface, that sensor will output a voltage closer to 5V. Conversely, if the sensor encounters a very reflective (white) surface, it will output closer to 0V. I can read these 6 analog outputs from my 6 sensors through the 6 analog input pins on my Arduino. In addition, my STOP algorithm uses nested if statements to check 3 times whether the robot is really at the end before it stops for 10 seconds, blinking the light.
This prevents an accidental stop in the middle of the track due to inaccurate readings or glitches. During calibration, I calculated an average value of reflectance which I use later on to help with the navigation and decision making. I also printed some data to the serial monitor for testing purposes. Here's my code:

// ==========================================================
// Feel free to change and use this code, but please give me credit.
// Author: Austin Duff - June 24, 2009
// ==========================================================
#include <PololuQTRSensors.h>   // included but unused; sensor reading is done from scratch
#include <Servo.h>
#include <MegaServo.h>

#define NBR_SERVOS 3            // two drive servos plus the tower servo
#define FIRST_SERVO_PIN 2

Servo left;                     // left drive servo
Servo right;                    // right drive servo
Servo tower;                    // sensor tower servo
MegaServo Servos[NBR_SERVOS];

int pingPin = 7;                // ultrasonic sensor pin
int mid = 0;                    // average reflectance found during calibration
int mn = 0;                     // minimum reading seen during calibration
int mx = 0;                     // maximum reading seen during calibration
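The error-off-the-line idea described above can be sketched as a weighted average of the sensor readings. This is a hypothetical illustration in Python for clarity, not the robot's actual code: the function names (`line_error`, `turn_time_ms`), the position weights, and the `gain` constant are all made up for the example.

```python
# Hypothetical sketch of the weighted-average error calculation: each sensor's
# position is weighted by how strongly it sees the line (higher = darker).
def line_error(readings, centers=(-5, -3, -1, 1, 3, 5)):
    """readings: 6 calibrated reflectance values, one per sensor.
    Returns a signed error; 0 means the line is centered under the robot."""
    total = sum(readings)
    if total == 0:
        return 0.0  # no line detected anywhere
    return sum(w * r for w, r in zip(centers, readings)) / total

def turn_time_ms(error, gain=20):
    # A larger error produces a longer correction turn; a tiny error only a
    # brief nudge, which is what reduces overcompensation.
    return abs(error) * gain

print(line_error([0, 0, 100, 100, 0, 0]))  # 0.0 -> line centered
print(line_error([0, 0, 0, 0, 100, 100]))  # 4.0 -> line far to one side
```

On the real robot the same calculation would run over the raw analog pin readings, and `turn_time_ms` would set how long one servo reverses relative to the other.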
Help to solve issue

I've been using this kit for a while without a problem: Windows 10 Pro, WSL2 fedoraremix distribution, VPN Cisco AnyConnect mobility 4.10.x. The drill was to launch the fedoraremix and afterwards launch wsl-vpnkit using:

wsl.exe -d wsl-vpnkit service wsl-vpnkit start

from inside a script in the fedoraremix distribution. For two days now, this is not working anymore. I'm not sure whether it is some kind of policy enforcement from my corporation. What I've noticed: connection from WSL2 to the internet with the VPN off is possible, but that is not practical for remote working. As soon as I launch the VPN, outbound connections are lost. "Can't resolve names" is one of the messages I saw using ping. I don't remember the exact 0.2.x version I was using (I've been switching from the latest to 0.2.5 these days trying to fix it).

What I've noticed:
- v0.2.x: the eth1 interface exists but with no IP address assigned (at least not shown).
- v0.3.2: no eth1 interface. However, eth0 has two similar IPs:

4: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:92:c9:ce brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65521 numtxqueues 64 numrxqueues 64 gso_max_size 62780 gso_max_segs 65535
    inet <IP_ADDRESS>/20 brd <IP_ADDRESS> scope global eth0
       valid_lft forever preferred_lft forever
    inet <IP_ADDRESS>/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::215:5dff:fe92:c9ce/64 scope link
       valid_lft forever preferred_lft forever

So I don't know if it has to do with permissions, firewall, policies or something like that. I also don't know how to properly debug the issue and find out where the traffic is being blocked. My network expertise is low. Appreciate any help.

Update: My user has admin privileges again, and even with 0.3.x I'm able to connect again via VPN.
dev eth1 shows up again with the ip a command:

6: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:08:a8:b5 brd ff:ff:ff:ff:ff:ff
    inet <IP_ADDRESS>/20 brd <IP_ADDRESS> scope global eth0
       valid_lft forever preferred_lft forever
    inet <IP_ADDRESS>/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::215:5dff:fe08:a8b5/64 scope link
       valid_lft forever preferred_lft forever
7: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether 5a:94:ef:e4:0c:ee brd ff:ff:ff:ff:ff:ff
    inet <IP_ADDRESS>/24 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::5894:efff:fee4:cee/64 scope link
       valid_lft forever preferred_lft forever

That didn't happen with an unprivileged user. I would like to understand the reason, although as long as I can use it, that is good enough.
using CSharpCrawler.Model;
using System;
using System.Collections.Generic;
using System.Xml.Linq;
using System.Xml.XPath;

namespace CSharpCrawler.Util
{
    public class ConfigUtil
    {
        private const string ConfigPath = "./config/0_0.xml";
        private XDocument doc = null;

        public ConfigUtil()
        {
        }

        public ConfigStruct LoadConfig()
        {
            ConfigStruct configStruct = new ConfigStruct();
            FetchUrlConfig urlConfig = new FetchUrlConfig();
            FetchImageConfig imageConfig = new FetchImageConfig();
            CommonConfig commonConfig = new CommonConfig();
            List<Theme> themeList = new List<Theme>();

            try
            {
                doc = XDocument.Load(ConfigPath);

                XElement eleUrl = doc.Root.Element("FetchUrl");
                XElement eleImage = doc.Root.Element("FetchImage");
                XElement eleCommon = doc.Root.Element("Common");
                IEnumerable<XElement> eleThemeList = doc.Root.XPathSelectElements("ThemeList/Theme");

                urlConfig.Depth = eleUrl.Element("Depth").Value;
                urlConfig.IgnoreUrlCheck = eleUrl.Element("IgnoreUrlCheck").Value == "1";
                urlConfig.DynamicGrab = eleUrl.Element("DynamicGrab").Value == "1";

                imageConfig.Depth = eleImage.Element("Depth").Value;
                imageConfig.IgnoreUrlCheck = eleImage.Element("IgnoreUrlCheck").Value == "1";
                imageConfig.DynamicGrab = eleImage.Element("DynamicGrab").Value == "1";
                imageConfig.MaxResolution = eleImage.Element("MaxResolution").Value;
                imageConfig.MinResolution = eleImage.Element("MinResolution").Value;
                imageConfig.MinSize = Convert.ToInt32(eleImage.Element("MinSize").Value);
                imageConfig.MaxSize = Convert.ToInt32(eleImage.Element("MaxSize").Value);
                imageConfig.FetchMode = Convert.ToInt32(eleImage.Element("FetchMode").Value);

                commonConfig.UrlCheck = eleCommon.Element("UrlCheck").Value == "1";

                foreach (var item in eleThemeList)
                {
                    Theme theme = new Theme()
                    {
                        Background = item.Element("Background").Value,
                        BackgroundType = (BackgroundType)int.Parse(item.Element("Background").Attribute("Type").Value)
                    };
                    themeList.Add(theme);
                }

                configStruct.ImageConfig = imageConfig;
                configStruct.UrlConfig = urlConfig;
                configStruct.CommonConfig = commonConfig;
                configStruct.ThemeList = themeList;
            }
            catch (Exception ex)
            {
                // Don't swallow errors silently; at least surface them for debugging.
                Console.WriteLine("Failed to load config: " + ex.Message);
            }

            return configStruct;
        }

        public bool SaveIgnoreUrlCheck(bool value)
        {
            try
            {
                // Note: doc is only populated by LoadConfig(), so call that first.
                var ele = doc.XPathSelectElement("Crawler/FetchUrl/IgnoreUrlCheck");
                if (ele != null)
                {
                    ele.Value = value ? "1" : "0";
                    doc.Save(ConfigPath);
                }
                return true;
            }
            catch (Exception ex)
            {
                Console.WriteLine("Failed to save config: " + ex.Message);
                return false;
            }
        }
    }
}
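The same load-with-defaults pattern can be sketched compactly in another language. The following is a hypothetical Python illustration (the element names mirror the FetchUrl section above, but `load_fetch_url_config` and `DEFAULTS` are invented for the example); note that the error path reports the failure instead of swallowing it the way an empty catch block would:

```python
# Hypothetical sketch: read an XML config section, map "1"/"0" flags to
# booleans, and fall back to defaults on a parse failure.
import xml.etree.ElementTree as ET

DEFAULTS = {"Depth": "1", "IgnoreUrlCheck": False, "DynamicGrab": False}

def load_fetch_url_config(xml_text):
    try:
        root = ET.fromstring(xml_text)
        node = root.find("FetchUrl")
        return {
            "Depth": node.findtext("Depth"),
            "IgnoreUrlCheck": node.findtext("IgnoreUrlCheck") == "1",
            "DynamicGrab": node.findtext("DynamicGrab") == "1",
        }
    except (ET.ParseError, AttributeError) as exc:
        # Surface the problem rather than returning silently.
        print(f"config load failed, using defaults: {exc}")
        return dict(DEFAULTS)

cfg = load_fetch_url_config(
    "<Crawler><FetchUrl><Depth>3</Depth>"
    "<IgnoreUrlCheck>1</IgnoreUrlCheck>"
    "<DynamicGrab>0</DynamicGrab></FetchUrl></Crawler>"
)
print(cfg)  # {'Depth': '3', 'IgnoreUrlCheck': True, 'DynamicGrab': False}
```

The `== "1"` comparison already yields a boolean, which is why the C# code's `? true : false` ternaries are redundant.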
Our website ► https://mining.ecos.am/

Our social networks:
Twitter - https://twitter.com/ecosmining
Telegram - https://t.me/EcosCloudMining
Facebook - https://www.facebook.com/ecosmining/
LinkedIn - https://www.linkedin.com/company/1917...
Medium - https://medium.com/@ecosmining
Instagram - https://www.instagram.com/ecos_mining
VK - https://vk.com/ecoscloudmining

ECOS is an innovative IT company based in Armenia, in the Free Economic Zone. We have our own data center, where we provide cloud mining services as well as sell mining devices (ASICs) right from our warehouse and co-locate them on our hosting site. #phonemining #minefromandroid #bitcoin #crypto #btc #mining

Nice information about mining. I hope this is real mining and trustworthy. I can't register on the website.

Want more great content? Get the Merch or CryptoSlo Token
CryptoSlo Merch: https://teespring.com/stores/cryptoslo-youtube-official
CryptoSlo TRC-10 Token: https://tronscan.org/#/token/1002224
Subscribe and hit that bell notification for all the latest.
Subscribe now: https://www.youtube.com/HOWHEDOIT?sub_confirmation=1
If you're not talking gains, then we're not talking. :-)

DISCLAIMER: This YouTube channel does not, and in no way does it, represent itself as a registered broker, analyst, investment advisor or anything of that sort. Everything that we provide on this site is purely for guidance, informational and educational purposes. All information contained herein should be independently verified and confirmed. We do not accept any liability for any loss or damage whatsoever caused in reliance upon such information or services. Please be aware of the risks involved with any trading done in any financial market. Do not trade with money that you cannot afford to lose. When in doubt, you should consult a qualified financial advisor before making any investment decisions.
Trading and investing in cryptocurrencies (also called digital or virtual currencies, crypto assets, altcoins and so on) involves substantial risk of loss and is not suitable for every investor. The valuation of cryptocurrencies and futures may fluctuate, and, as a result, clients may lose more than their original investment. The highly leveraged nature of futures trading means that small market movements will have a great impact on your trading account; this can work against you, leading to large losses, or work for you, leading to large gains. If the market moves against you, you may sustain a total loss greater than the amount you deposited into your account. You are responsible for all the risks and financial resources you use and for the chosen trading system. You should not engage in trading unless you fully understand the nature of the transactions you are entering into and the extent of your exposure to loss. If you do not fully understand these risks, you must seek independent advice from your financial advisor. All trading strategies are used at your own risk.

Edited with: Camtasia
#cryptoslo #cryptocurrency #bitcoin

How to register for RVN? Thanks again! You have been the best source for Ravencoin news!

ECOS Mining is offering all Crypto Rick viewers a FREE mining contract! Also, when you sign up, please come back and comment here. In a week or two I will select a winner to receive a free 1 TH/s mining contract for 1 year and $0 service fee.

Mike lowbar is the username

$5,000 confirmed thanks to Lordnikon_57 on IG. He's trustworthy.

I actually think that ECOS is a scam. I have made a small video to explain that.
The Pros and Cons of Monolithic vs. Microservice Architectures

Comparing monolithic and microservices architectures is useful in software development. The microservices architecture trend in app development has been very successful for several reasons; monolithic architecture, on the other hand, has long been the standard. When comparing monolithic vs. microservices architecture, there are some notable differences. For one, while monolithic architecture can also result in reliable solutions, microservices architecture provides enterprises with measurable benefits. Moreover, though both techniques for software development can result in excellent outcomes, their resulting structural makeup differs considerably. Each architecture comes with various advantages and disadvantages that companies must be aware of. Let's take a look at each architecture and its pros and cons.

Monolithic Architecture

Also referred to as a monolith, monolithic architecture is a single, sizable computing network with one code base that unites all of the business concerns. Typically, the functionality of an application is developed and released as a single whole entity. This singular nature comes with various advantages. Here are some of them.

- Easier development. Given that a monolithic architecture contains all of the functionality of an application in a single codebase, it is reasonably easy to comprehend and use. Because of this, the program may be simpler for developers to learn, manage, and maintain over time, and it is also easier for them to work with the code.
- Easier deployment. Since monolithic applications have fewer moving parts, they are easier to manage and maintain. Overall, a monolithic architecture is simpler to deploy, administer, and maintain than a microservices solution due to its self-contained structure.
- Easier testing.
It may be simpler to test a monolithic architecture as a whole because all of the code is contained in a single unit. This can facilitate the early detection of bugs and other problems during the development process, saving time and money.

While the convenience of monolithic architecture may be convincing, the methodology also comes with various disadvantages, including the following.

- Difficult scaling. Because the entire application must be scaled at once, monolithic applications are challenging to scale up. This can limit the application's overall scalability and make it difficult to handle sudden increases in traffic or other demands on it.
- Tight coupling. In a monolithic architecture, every element of an application is tightly coupled, meaning that alterations to one portion of the code may affect other parts. As a result, it can be challenging to make modifications to the application without introducing bugs or other problems.
- Difficult updating. It can be challenging to update or replace specific parts of a monolithic application without affecting the rest of the code, because all of the code is housed in a single unit. This can make it tough to update the application and to add new functionality or features.
- Difficult maintenance. A monolith becomes harder to maintain as it expands over time, because the amount and complexity of the code grow. Changes made to one component of a monolith may alter the app's behavior unexpectedly.

The good thing for companies and developers is that monolithic architecture is not their only option. Though they may be convinced by the pros of a monolith's singular nature, companies can also opt for microservices architecture.

Microservices Architecture
An alternative to the structural design of service-oriented architecture, microservices architecture is a design that sets up an application as a collection of well-tuned, loosely linked services that communicate with one another via simple protocols. The architecture offers a framework for developing, launching, and maintaining microservices independently. The multi-compartmental nature of microservices architecture comes with various benefits and advantages. Here are some of them. - Scalable. This style of architecture’s services can be maintained and scaled independently of one another because they are all discrete and loosely coupled. From a cost standpoint, this can be very advantageous because companies would only need to pay for the necessary scalability. - Flexible. The independence of each service allows for the easy addition of new features and technology. There are also no interlocking dependencies, which means that outdated functionality can be removed without worrying about how the removal will affect other application components. - Independent. Each service in a microservice architecture is designed independently of the others. Hence, the development of one service’s processes won’t impact the creation of another. Moreover, resources, like development tools, won’t be dependent on other unnecessary features. Now, what about the cons of using microservices architecture? Here are some things to consider. - Complex deployment. It can be more difficult to create and manage a microservices architecture than a monolithic architecture because it involves developing an application as a collection of independent services. This may make it more complex for engineers to comprehend and work with the code, as well as to debug and troubleshoot problems. - Specialized skills are needed. Microservices demand specialized developers since they are more complicated. 
If the company wishes to deploy microservices, they need to think about whether the team can handle the difficulties that come with this architectural strategy’s complexity. - Inconsistency with standards. A lack of consistency among the microservices in an application is a possibility because each microservice can be created and maintained by a distinct team. As a result, it may become more difficult to administer and maintain the program over time and to make sure that the microservices are compatible with one another. The choice between a monolithic and a microservices architecture will ultimately be based on the project’s unique requirements and objectives, as well as the expertise and skills of the development team. To choose the best course of action in a specific circumstance, it may be helpful to seek the advice of experts or research thoroughly about which option would best suit the business.
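The loose coupling described above can be made concrete with a small sketch. The following is a hypothetical Python illustration (the service names, the JSON message fields, and the injected `send` transport are all invented for the example): two "services" share no code and communicate only through a simple JSON message contract, so either side can be rewritten or redeployed independently as long as the contract holds.

```python
import json

class InventoryService:
    """Owns stock data; exposes nothing but a JSON request/reply handler."""
    def __init__(self):
        self._stock = {"widget": 5}

    def handle(self, request: str) -> str:
        msg = json.loads(request)
        if msg["action"] == "reserve" and self._stock.get(msg["item"], 0) > 0:
            self._stock[msg["item"]] -= 1
            return json.dumps({"ok": True})
        return json.dumps({"ok": False})

class OrderService:
    """Knows only the message contract, not InventoryService's internals."""
    def __init__(self, send):
        self._send = send  # transport is injected: HTTP, a queue, or a test stub

    def place_order(self, item: str) -> bool:
        reply = self._send(json.dumps({"action": "reserve", "item": item}))
        return json.loads(reply)["ok"]

inventory = InventoryService()
orders = OrderService(send=inventory.handle)  # direct call stands in for the network
print(orders.place_order("widget"))  # True while stock remains
```

In a real deployment `send` would wrap an HTTP client or message broker, which is exactly where the "complex deployment" and monitoring costs discussed above come from; in a monolith, `place_order` would simply call the inventory code directly.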
Windows 10: how do I pin Hotmail to the taskbar

The above technique I posted is also a good idea for users with small devices and/or very limited disk space (such as Windows Phones or tablets), as they can use the online versions of Outlook, Office, and Skype without installing the software, and just click the "app" shortcut when they need it.

Cliff, sorry, I see what you mean. I can place the shortcut anywhere except on the taskbar (it insists on adding it to the Firefox popup menu). Usable, but not what was asked for.

You have to add the prefix explorer.exe<space> to pin it to the taskbar. End of story.

Well, I had given up, but the post from Cliff S was too tempting. So I followed his advice to use IE and I almost made it, but got a Bing logo on the taskbar which, when expanded, took me to the OUTLOOK login page, from which I was able to get Hotmail. Not good enough!!! So I fiddled and farted around and accidentally got an Outlook logo on the taskbar, from which I can open Outlook instantly and see my Hotmail inbox. I consider this solved and thank all those who contributed. I cannot detail my moves from the Bing logo to the Outlook logo, so I guess I am not contributing much. SORRY.

Outlook and Hotmail are the same site. They are just two different usernames.

Understood, Mystere. But now I can click on the Outlook logo pinned to the taskbar and get taken directly to my Hotmail inbox. I could not do that before.

Please could someone explain in plain English how do I set up a new Hotmail account? My old one does not work and I have been trying to set up a new one, but all this jargon does not help a poor old man. I had Hotmail for 10 years but I don't get it...
Please help. I had hotmail.com on my Windows 10; for some reason I cannot get it now. Have I gone wrong somewhere? I have had it for years, so my mail must be floating around in space now. Any help please!

My email address ends in hotmail.com. I receive emails with no problems, but when I compose a new email or forward emails, more often than not I get the error message "Can't connect to Outlook, please try later" with a CLOSE button. When I click CLOSE...

Just loaded Windows 10, and it seems to want to link to a Microsoft account (e.g. hotmail.com). The thing is, when I shut down and then start up again, it insists on my Hotmail password to log on. What I want is to turn this off. I want to...

Running Windows 10 Pro here. I opened Internet Explorer, opened Hotmail, and went to create a new email to one of my contacts, and noticed when I was typing in the email address that it didn't auto-display from my contacts. I then...
Hi, we have a dual-NIC ISA 2006 Standard with a single-server Exchange 2003 setup. I have published OWA with FBA. It works fine, but when I try to change the password after logging in to the FBA page, an error occurs, displayed on the page: "An error occurred while trying to change the password. Please contact technical support for your organization." It works if you access the "Change Password" option in OWA under Options, but not on the OWA logon screen!! The ISA Server is a member of the target domain, there are no AD or DNS problems, and the event logs are empty. Ideas?

Regards, Henrik

Hi! Yes, we have enabled both "change password" and "display notification when password expires". The options do show up on the logon page and you get the option to type in the new password, but when we click "OK" it generates the above message.

I had a problem before where people couldn't change their passwords through their workstations; only when the password expired could they change it. Someone had changed the security settings in AD to not allow users to change their passwords. I'm guessing they can change their password on the local network? If not, maybe it is a security issue in AD.

It really should work! I was testing the password change feature last week and it worked fine, although I have to say that it seemed a bit inconsistent. For example, I tried to change the Administrator's password and it would not work. Then I created a new user and that seemed to work. It even allowed me to create a new user, configure the account to require a password change on first logon, and the password change notification showed in the FBA screen and the password change worked fine.

I've got ISA 2006 in a single-NIC config, using LDAPS, published to two front-end servers with the web farm function. The Exchange back end is running on a DC that also has certsrv installed. I've got forms-based authentication with the option to change passwords. The single-NIC ISA is in a DMZ, installed on a domain controller with CSS and a DC.
We are planning to add another ISA 2006 machine for NLB, which will also have a DC and CSS running. Everything seems fine: RPC over HTTPS is working, as are OWA and OMA. I haven't tried ActiveSync yet, but I'm sure it will work. The LDAPS rule uses a normal user account to check the status of the account. When the user tries to change his password, it gets denied with the following: "The password supplied does not meet the minimum complexity requirements. Please try again." The user is able to change his password on the desktop without any problems.

So, to fix this you had to install a CA and enable LDAPS on all of the domain controllers? Even though it was joined to the domain? The URL mentioned using LDAP to authenticate; I am using Active Directory.

ISA 2006 Standard, domain member, LDAPS enabled to allow password changes. Before enabling LDAPS authentication, the forms logon was fast (half a second); after enabling it, the same forms logon takes 15 seconds. Does this sound like a configuration problem, or is this normal?

I have the same issue: I'm unable to change passwords at the ISA 2006 / OWA login screen due to "complexity issues". I was previously having trouble changing passwords inside OWA, but I added /IISADMPWD/* to the paths and made the appropriate IIS directory and registry changes on my Exchange servers. We're using FBA with Active Directory, so I don't have LDAPS to implement (or at least, I haven't enabled it). I double-checked that the listener is configured to allow password changing through ISA. The one thing I see that could be getting in the way is that our ISA server is joined to a different domain than the one our internal users log into (and of which Exchange is a part). It's part of an extranet domain which has a one-way trust established with the internal domain. This ISA server was originally intended for our SharePoint extranet, but I added a third interface and am using it for Exchange also.
The ISA server physically has interfaces in the extranet as well as the internal network (and thus has rules for each), but it authenticates via the extranet Active Directory. The extranet domain controllers have a one-way trust established to my internal domain controllers. Is there anything else that would need to be configured to allow password changing in this type of scenario?