We have a FreeIPA server and some clients. One of the clients runs a (minimal) Docker container with a custom application. The application does user authentication and authorization using PAM. Is there a good way to make PAM delegate all decisions to the host running the Docker container? We'd like to avoid configuring the container as a separate FreeIPA client. Dominik

Yesterday we migrated our dev servers to IPA. To help in the migration, I enabled the allow_all HBAC rule, but despite that, some users get this message:

Jul 29 15:56:23 el4966 sshd: Postponed keyboard-interactive for id094844 from 126.96.36.199 port 35552 ssh2 [preauth]
Jul 29 15:56:49 el4966 sshd: pam_sss(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=el1921.bc user=id094844
Jul 29 15:56:49 el4966 sshd: pam_sss(sshd:auth): received for user id094844: 6 (Permission denied) <----- This
Jul 29 15:56:52 el4966 sshd: error: PAM: Authentication failure for id094844 from el1921.bc
Jul 29 15:56:52 el4966 sshd: Failed keyboard-interactive/pam for id094844 from 188.8.131.52 port 35552 ssh2
Jul 29 15:56:58 el4966 sshd: Postponed keyboard-interactive for id094844 from 184.108.40.206 port 35552 ssh2 [preauth]
Jul 29 15:57:00 el4966 sshd: Connection closed by 220.127.116.11 port 35552 [preauth]

These are external (AD) users. The weird thing: not all users are affected, and not everywhere... I tried removing the LDAP filter on the IPA server -> same thing... I'm running out of ideas... Thanks for your help!

I have a FreeIPA setup that trusts an Active Directory domain. I have users who exist in the AD domain, but who are unable to log into Linux systems.
The domains are:
- ad.domain.example: the Active Directory domain
- ipa.ad.domain.example: the FreeIPA domain

The user has a SAM-Account-Name of 'user.name' and a userPrincipalName of 'user.name@thirdparty.com'. Here are the log messages I see when one of them tries to log in:

==> krb5_child.log <==
(Thu Jul 23 11:08:58 2020) [[sssd[krb5_child]]] [get_and_save_tgt] (0x0020): 1704: [-1765328378][Client 'user.name\@THIRDPARTY.COM@IPA.AD.DOMAIN.EXAMPLE' not found in Kerberos database]
(Thu Jul 23 11:08:58 2020) [[sssd[krb5_child]]] [map_krb5_error] (0x0020): 1833: [-1765328378][Client 'user.name\@THIRDPARTY.COM@IPA.AD.DOMAIN.EXAMPLE' not found in Kerberos database]

==> sssd_ipa.ad.domain.example.log <==
(Thu Jul 23 11:08:58 2020) [sssd[be[ipa.ad.domain.example]]] [krb5_auth_done] (0x0040): The krb5_child process returned an error. Please inspect the krb5_child.log file or the journal for more information

A bit of research brings me to this: a UPN suffix has the following restrictions:
- It must be the DNS name of a domain, but does not need to be the name of the domain that contains the user.
- It must be the name of a domain in the current domain forest, or an alternate name listed in the upnSuffixes attribute of the Partitions container within the Configuration container.

I believe the user account violates the second of these restrictions, in that its suffix (thirdparty.com) is neither in the AD forest, nor is it found in the upnSuffixes attribute of CN=Partitions,CN=Configuration,DC=ad,DC=domain,DC=example in AD.

Now the ugly part. I suspect this is just How Things Are Done around here, and getting the user's userPrincipalName changed to ad.domain.example will not be possible. So in the meantime, is there any configuration I can do, either on the FreeIPA servers or on the machine where the user needs to log in, to work around the UPN suffix mismatch?

I am able to get a TGT for the user with 'kinit user.name@AD.DOMAIN.EXAMPLE', so I guess I'm looking for a hypothetical way to tell sssd to map the UPN suffix in the user's domain (thirdparty.com) to ad.domain.example when it tries to get a ticket during user login... I can also ask to get thirdparty.com added to the AD domain's list of UPN suffixes. Can anyone confirm whether this would be sufficient to get sssd to be able to authenticate the user?

Sam Morris <https://robots.org.uk/>

Regardless of what I officially do, my CentOS is not pulling the latest FreeIPA binaries to install; it sticks with 4.7. Since everything is working, it is not a deal-breaker, but still, now that everything is stable and the config is absolutely correct, it would be time to upgrade and stay up to date with all fixes. Any help on this topic please? Thanks in advance!

All of a sudden, automounting home shares has stopped working on one of our most important servers. The configuration has not changed at all. Automounting on servers with identical configuration works. What I tried so far:
1) stopping rpcidmapd, rpcgssd, autofs, sssd and restarting the services
2) rebooting the system
3) doing ipa-client-automount --uninstall and reconfiguring it again
4) checking /etc/sysconfig/nfs and /etc/idmapd.conf as well as sssd.conf
5) automount -fv tells me that it attempts to mount /home but nothing happens

No matter what I tried, I could not get the home shares mounted again. I would highly appreciate any input that brings me one step further.

I installed a FreeIPA replica on 4.8.4 on CentOS 8 from 4.4.4 on Fedora 25 with `ipa-replica-install --setup-dns --auto-forwarders`, without `--setup-ca` due to errors, which went fine.
I do want to install a CA though, which failed when I did `--setup-ca` and then later `ipa-ca-install`, with the following error:

[4/29]: creating installation admin user
Unable to log in as uid=admin-freeipa2.infra.opensuse.org,ou=people,o=ipaca on ldap://freeipa.infra.opensuse.org:389
[hint] tune with replication_wait_timeout
[error] NotFound: uid=admin-freeipa2.infra.opensuse.org,ou=people,o=ipaca did not replicate to ldap://freeipa.infra.opensuse.org:389
Your system may be partly configured. Run /usr/sbin/ipa-server-install --uninstall to clean up.

Obviously I did try extending the timeout based on that hint, but I don't think that was helpful in the end, considering the logs produced by the server process in the journal:

192.168.47.90 - - [23/Jul/2020:00:25:36 +0000] "GET /ca/rest/account/login HTTP/1.1" 401 994
SSLAuthenticatorWithFallback: Authenticating with BASIC authentication
SSLAuthenticatorWithFallback: Fallback auth header: WWW-Authenticate=Basic realm="Certificate Authority"
SSLAuthenticatorWithFallback: Fallback auth return code: 401
SSLAuthenticatorWithFallback: Result: false

and from the PKI logs:

Failed to authenticate as admin UID=admin-freeipa2.infra.opensuse.org. Error: netscape.ldap.LDAPException: error result (49)

I don't really know how to proceed from here, since those errors don't mean much to me. I see, however, that it's not just me having issues with `ipa-ca-install`; at least one report is similar to this one (although by the looks of it, the reason there is different). Thanks in advance!

We have cloned one of our Linux servers which has IPA user accounts. Let's say the servers are:
1. server a
2. server b
3. server c
4. server a.1 (the cloned server)

One user has been created on server a.1, b, and c. I am able to log in from b to c and from c to b, but when I try to log in from server b to server a.1, or from c to server a.1, I get an "authentication failed" error. When I dig deeper with `cat /var/log/secure`, I see the message "authentication token is no longer valid; new one required" (in the logs on server b and server c). When I check the logs on server a.1 (`cat /var/log/secure`), I see:

pam_sss(sshd:auth): authentication success; logname= uid=0 euid=0 tty=ssh ruser= rhost=server c
error: PAM: user account has expired for USER from server c

Please help me fix it.
New version app has been published

Hi, unfortunately there is also a problem with accessing the Panasonic servers with the latest code for me. Does anyone have the same problem? Currently this is the error:

ricsi@ricsi-srv:~/Downloads/trees/pcomfort/python-panasonic-comfort-cloud$ ./pcomfortcloud.py list
Traceback (most recent call last):
  File "./pcomfortcloud.py", line 5, in <module>
    main.main()
  File "/home/ricsi/Downloads/trees/pcomfort/python-panasonic-comfort-cloud/pcomfortcloud/main.py", line 202, in main
    session.login()
  File "/home/ricsi/Downloads/trees/pcomfort/python-panasonic-comfort-cloud/pcomfortcloud/session.py", line 99, in login
    self._create_token()
  File "/home/ricsi/Downloads/trees/pcomfort/python-panasonic-comfort-cloud/pcomfortcloud/session.py", line 132, in _create_token
    raise ResponseError(response.status_code, response.text)
pcomfortcloud.session.ResponseError: Invalid response, status code: 401 - Data: {"message":"New version app has been published","code":4106}

Thanks for checking.

Same here, since a few hours ago.

Apparently, some version check logic changed on the Panasonic side. A quick fix: change the app version number in session.py, for the HTTP header X-APP-VERSION, from 2.00 to 1.9.0 (which is the current version of the PCC app in the app stores).

Very cool. The quick fix works for me also. Thanks for the fast, great help :)

Great, thanks, wish I had seen this before I started to sniff my phone :-D

Which app do you use for sniffing the pcomfort cloud app? It might help in the future with similar problems.

Well, I'm only using a proxy on my phone to route all the traffic through Fiddler on my computer. This works as long as the app itself doesn't do certificate verification of the issuer/domain.
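To make the quick fix concrete, here is a minimal Python sketch (not the actual pcomfortcloud source; the header names other than X-APP-VERSION are illustrative assumptions) of what the change amounts to: the client simply has to send a version string the Panasonic backend still accepts.

```python
# Illustrative sketch only -- in the real library this is a one-line edit to
# the version string that session.py sends in the X-APP-VERSION header.
APP_VERSION = "1.9.0"  # was "2.00"; 1.9.0 matched the app-store PCC app at the time

def build_headers(token=None):
    """Build the HTTP headers for a cloud API request (names are assumptions)."""
    headers = {
        "X-APP-VERSION": APP_VERSION,
        "Accept": "application/json",
    }
    if token:
        # Hypothetical auth header, shown only to round out the example
        headers["X-User-Authorization"] = token
    return headers
```

The point of the sketch is that the 401 with code 4106 is purely a server-side version check, so no protocol change is needed beyond the header value.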
<?php

use \Tsugi\Core\LTIX;
use \Tsugi\Util\LTI;
use \Tsugi\Util\Net;

$sanity = array(
    're.findall' => 'You should use re.findall() to extract the numbers'
);

// Compute the stuff for the output
$code = $USER->id + $LINK->id + $CONTEXT->id;
$sample_url = dataUrl('regex_sum_42.txt');
$actual_url = dataUrl('regex_sum_'.$code.'.txt');

$sample_data = Net::doGet($sample_url);
$sample_count = strlen($sample_data);
$response = Net::getLastHttpResponse();
if ( $response != 200 ) {
    die("Response=$response url=$sample_url");
}

$actual_data = Net::doGet($actual_url);
$actual_count = strlen($actual_data);
$response = Net::getLastHttpResponse();
if ( $response != 200 ) {
    die("Response=$response url=$actual_url");
}

$actual_matches = array();
preg_match_all('/[0-9]+/', $actual_data, $actual_matches);
$actual_count = count($actual_matches[0]);
$actual_sum = 0;
foreach($actual_matches[0] as $match ) {
    $actual_sum = $actual_sum + $match;
}

$sample_matches = array();
preg_match_all('/[0-9]+/', $sample_data, $sample_matches);
$sample_count = count($sample_matches[0]);
$sample_sum = 0;
foreach($sample_matches[0] as $match ) {
    $sample_sum = $sample_sum + $match;
}

$oldgrade = $RESULT->grade;

if ( isset($_POST['sum']) && isset($_POST['code']) ) {
    $RESULT->setJsonKey('code', $_POST['code']);
    if ( $_POST['sum'] != $actual_sum ) {
        $_SESSION['error'] = "Your sum did not match";
        header('Location: '.addSession('index.php'));
        return;
    }
    $val = validate($sanity, $_POST['code']);
    if ( is_string($val) ) {
        $_SESSION['error'] = $val;
        header('Location: '.addSession('index.php'));
        return;
    }
    LTIX::gradeSendDueDate(1.0, $oldgrade, $dueDate);
    // Redirect to ourself
    header('Location: '.addSession('index.php'));
    return;
}

// echo($goodsha);
if ( $LINK->grade > 0 ) {
    echo('<p class="alert alert-info">Your current grade on this assignment is: '.($LINK->grade*100.0).'%</p>'."\n");
}
if ( $dueDate->message ) {
    echo('<p style="color:red;">'.$dueDate->message.'</p>'."\n");
}
?>
<p>
<b>Finding Numbers in a Haystack</b>
<p> In this assignment you will read through and parse a file with text and numbers. You will extract all the numbers in the file and compute the sum of the numbers. </p> <b>Data Files</b> <p> We provide two files for this assignment. One is a sample file where we give you the sum for your testing and the other is the actual data you need to process for the assignment. <ul> <li> Sample data: <a href="<?= deHttps($sample_url) ?>" target="_blank"><?= deHttps($sample_url) ?></a> (There are <?= $sample_count ?> values with a sum=<?= $sample_sum ?>) </li> <li> Actual data: <a href="<?= deHttps($actual_url) ?>" target="_blank"><?= deHttps($actual_url) ?></a> (There are <?= $actual_count ?> values and the sum ends with <?= $actual_sum%1000 ?>)<br/> </li> </ul> These links open in a new window. Make sure to save the file into the same folder as you will be writing your Python program. <b>Note:</b> Each student will have a distinct data file for the assignment - so only use your own data file for analysis. </p> <b>Data Format</b> <p> The file contains much of the text from the introduction of the textbook except that random numbers are inserted throughout the text. Here is a sample of the output you might see: <pre> Why should you learn to write programs? 7746 12 1929 8827 Writing programs (or programming) is a very creative 7 and rewarding activity. You can write programs for many reasons, ranging from making your living to solving 8837 a difficult data analysis problem to having fun to helping 128 someone else solve a problem. This book assumes that everyone needs to know how to program ... </pre> The sum for the sample text above is <b>27486</b>. The numbers can appear anywhere in the line. There can be any number of numbers in each line (including none). 
</p>
<b>Handling The Data</b>
<p>
The basic outline of this problem is to read the file, look for integers using <b>re.findall()</b> with a regular expression of <b>'[0-9]+'</b>, then convert the extracted strings to integers and sum up the integers.
</p>
<p>
<?php httpsWarning($sample_url); ?>
<b>Turn in Assignment</b>
<form method="post">
Enter the sum from the actual data and your Python code below:<br/>
Sum: <input type="text" size="20" name="sum"> (ends with <?= $actual_sum%1000 ?>)
<input type="submit" value="Submit Assignment"><br/>
Python code:<br/>
<textarea rows="20" style="width: 90%" name="code"></textarea><br/>
</form>
</p>
<b>Optional: Just for Fun</b>
<p>
There are a number of different ways to approach this problem. While we don't recommend trying to write the most compact code possible, it can sometimes be a fun exercise. Here is a redacted two-line version of this program using a list comprehension:
<pre>
import re
print sum( [ ****** *** * in **********('[0-9]+',**************************.read()) ] )
</pre>
Please don't waste a lot of time trying to figure out the shortest solution until you have completed the homework. List comprehension is mentioned in Chapter 10 and the <b>read()</b> method is covered in Chapter 7.
</p>
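For illustration, the outline described above (re.findall(), convert, sum) can be sketched in ordinary Python 3; note the redacted two-line example in the text is Python 2, where print is a statement. The sample text is the snippet shown earlier on this page, whose sum is stated to be 27486.

```python
import re

def sum_numbers(text):
    """Extract every run of digits with re.findall() and sum them as integers."""
    return sum(int(n) for n in re.findall('[0-9]+', text))

# The sample text from the assignment page (its sum should be 27486):
sample = """Why should you learn to write programs? 7746
12 1929 8827
Writing programs (or programming) is a very creative 7
and rewarding activity. You can write programs for many
reasons, ranging from making your living to solving 8837
a difficult data analysis problem to having fun to helping 128
someone else solve a problem."""

print(sum_numbers(sample))  # -> 27486

# For the real assignment you would read your own data file instead, e.g.:
# print(sum_numbers(open('regex_sum_42.txt').read()))
```

The same function works unchanged on the actual data file, since re.findall() simply returns every digit run regardless of where it appears on a line.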
import moment from 'moment';

import { CalendarsService } from './calendars.service';

// Helper to make date comparison errors easier to read in the terminal
const formatDate = date => moment(date).format('YYYY-MM-DD');

// Format today's date as YYYY-MM-DD using UTC fields, with an optional day
// offset. Applying the offset *before* reading the year/month/day fields
// avoids wrong expectations at month and year boundaries.
const utcDate = (offsetDays: number = 0) => {
  const d = new Date();
  d.setUTCDate(d.getUTCDate() + offsetDays);
  const month = (d.getUTCMonth() + 1).toString().padStart(2, '0');
  const day = d.getUTCDate().toString().padStart(2, '0');
  return `${d.getUTCFullYear()}-${month}-${day}`;
};

describe(`CalendarsService - Day`, () => {
  const service: CalendarsService = new CalendarsService();

  it('Get a specific date within a civil year: 2025-02-03', async () => {
    const data = service.getDate({ year: 2025, month: 1, day: 3 });
    const date = moment(data.celebrations[0].date);
    expect(formatDate(date)).toBe('2025-02-03');
  });

  it('Get a specific date within a liturgical year: 2025-11-30', async () => {
    const data = service.getDate({ year: 2025, month: 10, day: 30, isLiturgical: true });
    const date = moment(data.celebrations[0].date);
    expect(formatDate(date)).toBe('2025-11-30');
  });

  it('Get a specific date within a liturgical year: 2025-11-28', async () => {
    const data = service.getDate({ year: 2025, month: 10, day: 28, isLiturgical: true });
    const date = moment(data.celebrations[0].date);
    expect(formatDate(date)).toBe('2026-11-28');
  });

  it("Get a specific date within a liturgical year that doesn't exist: 2025-11-29", async () => {
    const data = service.getDate({ year: 2025, month: 10, day: 29, isLiturgical: true });
    expect(data.celebrations.length).toBe(0);
  });

  it('Get yesterday within a civil year', async () => {
    const data = service.getYesterday();
    const date = moment(data.celebrations[0].date);
    expect(formatDate(date)).toBe(utcDate(-1));
  });

  it('Get yesterday within a liturgical year', async () => {
    const data = service.getYesterday({ isLiturgical: true });
    const date = moment(data.celebrations[0].date);
    expect(formatDate(date)).toBe(utcDate(-1));
  });

  it('Get today within a civil year', async () => {
    const data = service.getToday();
    const date = moment(data.celebrations[0].date);
    expect(formatDate(date)).toBe(utcDate());
  });

  it('Get today within a liturgical year', async () => {
    const data = service.getToday({ isLiturgical: true });
    const date = moment(data.celebrations[0].date);
    expect(formatDate(date)).toBe(utcDate());
  });

  it('Get tomorrow within a civil year', async () => {
    const data = service.getTomorrow();
    const date = moment(data.celebrations[0].date);
    expect(formatDate(date)).toBe(utcDate(1));
  });
});
The VMware Knowledge Base provides support solutions, error messages and troubleshooting guides.

Investigating VMware View Composer failure codes (2085204)

- You are unable to provision or recompose VMware Horizon View desktops.
- Provisioning or recomposing the VMware Horizon View desktops fails with an error similar to: View composer agent initialization state error (##): ...

|0||The policy was applied successfully.|
Note: Result code 0 does not appear in View Administrator. The linked-clone machine proceeds to a Ready state, unless a View error outside the domain of View Composer occurs.
|1||Failed to set the computer name.|
|2||Failed to redirect the user profiles to the View Composer persistent disk.|
|3||Failed to set the computer's domain account password.|
|4||Failed to back up a user's profile keys. The next time the user logs in to this linked-clone machine after the recompose operation, the OS creates a new profile directory for the user. As a new profile is created, the user cannot see the old profile data.|
|5||Failed to restore a user's profile. The user should not log in to the machine in this state because the profile state is undefined.|
|6||Errors not covered by other error codes. The View Composer agent log files in the guest OS can provide more information about the causes of these errors. (For more information on log locations, see the QuickPrep/Sysprep Script and Composer Customization logs section in Location of VMware View log files (1027744).)|
For example, a Windows Plug and Play (PnP) timeout can generate this error code. In this situation, View Composer times out after waiting for the PnP service to install new volumes for the linked-clone virtual machine. PnP mounts up to three disks, depending on how the pool was configured:
|7||Too many View Composer persistent disks are attached to the linked clone. A clone can have a maximum of three View Composer persistent disks.|
|8||A persistent disk could not be mounted on the datastore that was selected when the pool was created.|
|9||View Composer could not redirect disposable data files to the non-persistent disk. Either the paging file or the temp files folders were not redirected.|
|10||View Composer cannot find the QuickPrep configuration policy file on the specified internal disk.|
|12||View Composer cannot find the internal disk that contains the QuickPrep configuration policy file and other OS related data.|
|13||More than one persistent disk is configured to redirect the Windows user profile.|
|14||View Composer failed to unmount the internal disk.|
|15||The computer name that View Composer read from the configuration policy file does not match the current system name after the linked clone is initially powered on.|
|16||The View Composer agent did not start because the volume license for the guest OS was not activated.|
|17||The View Composer agent did not start. The agent timed out while waiting for Sysprep to start.|
|18||The View Composer agent failed to join the linked clone virtual machine to a domain during customization.|
|19||The View Composer agent failed to execute a post-synchronization script.|
|20||The View Composer agent failed to handle a machine password synchronization event.|
This error might be transient. If the linked clone joins the domain, the password is correct. If the clone fails to join the domain, restart the operation you performed before the error occurred. If you restarted the clone, restart it again. If you refreshed the clone, refresh it again. If the clone still fails to join the domain, recompose the clone.
- Linked clone deploy or recompose results in the error: View Composer agent initialization state error (6): Unknown failure (waited 0 seconds) (1011653)
- Deploying or recomposing a View desktop pool fails with the error: View composer agent initialization state error (2) (1021347)
- Provisioning VMware Horizon View desktops fails with error: View Composer Agent initialization error (16): Failed to activate software license. (1026556)
- Creating or provisioning a VMware View desktop pool fails with the error: View Composer Fault: VMware.Sim.Fault.VcDatastoreInaccessibleFault (2001736)
- Provisioning View desktop pool fails with the error: View Composer agent initialization state error (-65535) (2002096)
- Provisioning the View desktop pool fails with the error: View Composer agent initialization state error (18): Failed to join the domain (waited nnn seconds) (2006879)
- Creating a new desktop pool fails with the error: View Composer agent initialization state error (-1): illegal state (waited 0 seconds) (2009713)
- Provisioning or recomposing fails with the error: View Composer agent initialization state error (19): Failed to execute postsync script (2011315)
- View desktop customization fails with the error: View Composer agent initialization error (13) more than one persistent disk has the same usage (2013459)
- View Manager Admin console displays the error: Error during provisioning: Unexpected VC fault from View Composer (Unknown) (2014321)
- Guest customization runs repeatedly on virtual machines deployed to ESXi 5.1 host build 1743533 (2078352)
Cannot continue edit session in remote repositories

Trying to test https://github.com/microsoft/vscode/issues/158409...

1. Open a repo on vscode-dev (for me, https://insiders.vscode.dev/github/connor4312/mopidy-azure)
2. Make some changes in the readme.md
3. "Continue Edit Session" and "Reopen in Desktop"

The repository opens in desktop VS Code, but the edit session changes are not applied.

Update: I see the changes are actually in the Edit Sessions view, just not continued automatically by the protocol activation. I can manually resume the edit session once desktop opens and the changes appear.

Ah, do you have "workbench.experimental.editSessions.autoResume": "onReload" configured in settings?

I do not.

That will be why -- the auto resume behavior requires that setting to be configured in the destination workspace for now, otherwise it would still be a manual step.

Tried this and it doesn't work for me; do I need a new version of some extension?

@DonJayamanne you need to have the pre-releases of Remote Repositories and GitHub Repositories installed in the latest VS Code Insiders. If it doesn't work for you, could you please share the contents of the "Log (Edit Sessions)" output channel?

Yes, I'm on the pre-release of both of those extensions (in web and desktop). There's nothing in the Log (Edit Sessions) output. However, there are errors in the console window:

Logs
log.ts:301 INFO Electron sandbox mode is enabled!
log.ts:307 WARN Ignoring the error while validating workspace folder vscode-vfs://github/DonJayamanne/typescript-notebook - No file system provider found for resource 'vscode-vfs://github/DonJayamanne/typescript-notebook'
TMScopeRegistry.ts:47 Overwriting grammar scope name to file mapping for scope source.julia. Old grammar file: file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/extensions/julia/syntaxes/julia.tmLanguage.json.
New grammar file: file:///Users/donjayamanne/.vscode-insiders/extensions/julialang.language-julia-1.7.6/syntaxes/julia_vscode.json register @ TMScopeRegistry.ts:47 TMScopeRegistry.ts:47 Overwriting grammar scope name to file mapping for scope source.python. Old grammar file: file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/extensions/python/syntaxes/MagicPython.tmLanguage.json. New grammar file: file:///Users/donjayamanne/.vscode-insiders/extensions/magicstack.magicpython-1.1.0/grammars/MagicPython.tmLanguage register @ TMScopeRegistry.ts:47 TMScopeRegistry.ts:47 Overwriting grammar scope name to file mapping for scope source.yaml. Old grammar file: file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/extensions/yaml/syntaxes/yaml.tmLanguage.json. New grammar file: file:///Users/donjayamanne/.vscode-insiders/extensions/ms-toolsai.vscode-ai-0.16.0/syntaxes/yaml/tmLanguage.json register @ TMScopeRegistry.ts:47 TMScopeRegistry.ts:47 Overwriting grammar scope name to file mapping for scope source.r. Old grammar file: file:///Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/extensions/r/syntaxes/r.tmLanguage.json. 
New grammar file: file:///Users/donjayamanne/.vscode-insiders/extensions/reditorsupport.r-2.5.2/syntax/r.json register @ TMScopeRegistry.ts:47 console.ts:137 [Extension Host] rejected promise not handled within 1 second: Error: ENOENT: no such file or directory, scandir '/DonJayamanne/typescript-notebook' (at console.<anonymous> (/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:85:94751)) y @ console.ts:137 console.ts:137 [Extension Host] stack trace: Error: ENOENT: no such file or directory, scandir '/DonJayamanne/typescript-notebook' (at console.<anonymous> (/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:85:94751)) y @ console.ts:137 log.ts:313 ERR ENOENT: no such file or directory, scandir '/DonJayamanne/typescript-notebook': Error: ENOENT: no such file or directory, scandir '/DonJayamanne/typescript-notebook' mainThreadExtensionService.ts:111 Activating extension 'vscode-icons-team.vscode-icons' failed: ENOENT: no such file or directory, open '/DonJayamanne/typescript-notebook/package.json'. 
$onExtensionActivationError @ mainThreadExtensionService.ts:111 console.ts:137 [Extension Host] rejected promise not handled within 1 second: Error: command '_codespaces.setActiveRepository' not found (at console.<anonymous> (/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:85:94751)) y @ console.ts:137 console.ts:137 [Extension Host] stack trace: Error: command '_codespaces.setActiveRepository' not found at m._tryExecuteCommand (vscode-file://vscode-app/Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:1696:3532) at m.executeCommand (vscode-file://vscode-app/Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:1696:3414) (at console.<anonymous> (/Applications/Visual Studio Code - Insiders.app/Contents/Resources/app/out/vs/workbench/api/node/extensionHostProcess.js:85:94751)) y @ console.ts:137 log.ts:313 ERR command '_codespaces.setActiveRepository' not found: Error: command '_codespaces.setActiveRepository' not found at m._tryExecuteCommand (vscode-file://vscode-app/Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:1696:3532) at m.executeCommand (vscode-file://vscode-app/Applications/Visual%20Studio%20Code%20-%20Insiders.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:1696:3414) Hm, those errors are all spurious. 
Do you have the following version of VS Code: Version: 1.71.0-insider (system setup) Commit: 9529e11f6481ae53bba821b05e34549491b9415e Date: 2022-08-25T13:00:37.808Z Electron: 19.0.12 Chromium: 102.0.5005.167 Node.js: 16.14.2 V8: <IP_ADDRESS>-electron.0 OS: Windows_NT x64 10.0.22000 Sandboxed: Yes Or equivalently do you have the following settings "workbench.experimental.editSessions.enabled": true, "workbench.experimental.editSessions.autoResume": "onReload", Do you have the following version of VS Code: Yes Or equivalently do you have the following settings Yes, Oh--have you run the Edit Sessions: Sign In command on desktop VS Code before? If not, could you try doing that and then checking to see if the following flow works for you? https://user-images.githubusercontent.com/30305945/186822783-57af717b-b130-4537-bc2e-805d96f35f7e.mp4 We probably need some settings sync-ed memento state to communicate that we should ask for auth here.
It is imperative to note that this document discusses work with "hardware" modems only. While "soft" modems now dominate the laptop arena, as they are driven almost entirely by software with very little traditional modem hardware (thereby reducing the power requirements), they are more difficult to reverse engineer and therefore seldom supported.

This HOWTO suggests you use the Network config tool or, if that fails, the terminal dialer WVDial. WVDial may appear daunting at first glance, but it does what Linux has always provided -- more control of your hardware with decent feedback. So if all else fails in the graphical environment, take WVDial for a spin.

Use the Network configuration tool: YDL Menu --> System Settings --> Network ... and follow the on-screen configuration wizard.

WVDial is in fact the foundation software used by the graphical modem dialer programs. It is simple, easy to use, and straightforward. However, it does not conduct any configuration for you, nor does it necessarily know which set of arguments will work best for your modem. In this HOWTO, I will present a default configuration. If you desire or need to change the modem "talk" or "handshaking" configurations, you may visit the website of the manufacturer of the modem, use KPPP (described above) to test different settings, or discuss this matter in the General Mailing List at lists.terrasoftsolutions.com where others will share their success.

- As root, create a new configuration file:

    nano /etc/wvdial.conf [ENTER]

Then add the following to the new file:

    [Dialer Defaults]
    Modem = /dev/ttyS2
    Baud = 57600
    Init = ATZ
    Init2 = AT S11=50
    Phone = xxx-xxxx
    Username = my-username
    Password = my-password

    [Dialer phone2]
    Phone = xxx-xxxx

    [Dialer phone3]
    Phone = xxx-xxxx

    [Dialer shh]
    Init3 = ATM0

    [Dialer pulse]
    Dial Command = ATDP

Where the default "Phone" number may be your local dialup when not traveling and "phone2" may be a number in a different city.
In this case, you can change the name "phone2" to "sanfran" and add a "9, xxx-xxxx" in the actual Phone number listing to dial out on a hotel line. You can add a very long list of numbers without any problems.

- Save and exit nano per the instructions at the bottom of the screen.

- From a terminal or KDE/Gnome shell, as root:

wvdial phone2 [ENTER]

The author of WVDial has a good sense of humor about modems, so laugh along and hope for the best, as the author does for you. If the connection fails (due to a bad ID or passwd), you may run into one bug in WVDial: it may not allow you to try again until you have killed off the process or rebooted your computer. But once everything is set up, you will connect and maintain a solid connection.

As WVDial does not provide audible feedback during the connection process, you may want to monitor the "messages" from a second shell. To do this, as root:

tail -f /var/log/messages [ENTER]

As WVDial negotiates the connection you will observe some indication of this process. Once connected, the IP address of your local machine as well as the gateway will be displayed. As long as you are connected, these will remain active. If you are unintentionally disconnected, you will observe this in the "messages".

- To disconnect, simply press "CONTROL-C" once and WVDial will do its best to disconnect you. /var/log/messages will display this also.

For more information about WVDial, please refer to the "man pages" as follows:

man wvdial [ENTER]

A user adds his experience ... Michael Ahearn writes, "I can connect with my modem!
Here's what I did:"

- I added the following to /etc/modules.conf as per a few newsgroup entries:

alias /dev/ppp ppp_generic
alias char-major-108 ppp_generic
alias tty-ldisc-3 ppp_async
alias tty-ldisc-14 ppp_synctty
alias ppp-compress-21 bsd_comp
alias ppp-compress-24 ppp_deflate
alias ppp-compress-26 ppp_deflate

- Then I ran these commands as root (first I turned on logging for kppp and looked at the error -- both the kppp log and newsgroups suggested these commands):

mknod /dev/ppp c 108 0 [ENTER]
chmod 600 /dev/ppp

- There was also a suggestion to do "cat /dev/ttyS0" and watch the device. All this seems to have worked. I connect just fine and get an IP address (although it seems as though I have to cat /dev/ttyS0 every time I want to connect).

This HOWTO was prepared by Kai Staats, Terra Soft
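As a quick sanity check of the multi-section wvdial.conf layout described above, you can list the dialer sections before attempting to dial. A minimal sketch (the file path and contents here are illustrative, written to a temp file so nothing real is touched):

```shell
# Write a sample wvdial.conf to a temp file and list its [Dialer ...] sections.
# This only checks the file's structure; it does not touch the modem.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[Dialer Defaults]
Modem = /dev/ttyS2
Baud = 57600
Init = ATZ
Phone = xxx-xxxx

[Dialer phone2]
Phone = xxx-xxxx
EOF
# Each section header names a dialer; anything other than Defaults can be
# passed on the command line, e.g. "wvdial phone2".
sections=$(grep -o '^\[Dialer [^]]*\]' "$conf")
echo "$sections"
rm -f "$conf"
```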
Diversity hiring has become a beauty contest for companies; here are the true benefits of a diverse team and how you can achieve it. A couple of weeks ago I saw a post on social media about a team that won an 'award' for being the most diverse team. For me that was the perfect example of what has gone wrong with companies' diversity goals. The diversity hype has turned companies into reactive little creatures looking for likes because their team is so extremely diverse. They don't celebrate internally the benefits that come from having a diverse team; they want to live up to moral standards so they can show their family and friends how much they care. Diversity is most popular in the context of the workplace. But unfortunately diversity has become a way for companies to do well in the public eye, driven by superficial motives ('look how extremely diverse we are') which have diminished its true meaning and value. The discussion on diversity shouldn't be about recognition (awarding companies for being diverse); it should be about the necessity for a team and a business to be diverse and the actual benefits that brings. Because of the hype around diversity, it is looked upon in a very superficial way. People think they need people from diverse ethnic backgrounds, for example (enough coloured people in the team), because they like to be perceived as a company that cares about diversity. Colour is a popular diversity factor because it is clearly visible. The same goes for gender. Usually companies hiring for diversity are not aware of the actual benefits of having a diverse team, and they have no idea how to achieve it. Some people also look at diversity as a moral obligation. They feel pressured to do what's right and therefore they hire people from underrepresented groups.
But this, too, does not lead to any of the actual benefits of diversity in a team. And when people are hired merely for moral reasons instead of because there is an optimal match between their skills and interests and the job, you're not helping them at all. You shouldn't give someone a job offer because they are female, gay or black. Nobody wins if you do so. Diversity in essence is being dissimilar (different from each other). In traditional media and on social media diversity is usually presented as a matter of skin colour, sexual orientation and/or gender. But having a diverse team is about a lot more than that. A diverse team is one that varies in personal identities and therefore covers a variety of qualities, interests, preferences and beliefs. The true value of diversifying a team is that you create a collective identity that as a whole can signal, analyse and address as many different problems and solutions as possible to get to the best possible outcome. As a cross-functional team (a tech product team, for example) you really have a problem when you only have people who can think in a process-driven, structured, analytical way (the 'blue' personality type) and nobody in the team is expressive and imaginative (the 'yellow' personality type). The problem likely to occur in such a team is that everyone gets lost in the data and detail of things and the team is not able to build something unique that is not only based on historical patterns. Personality is a good example of a differentiating attribute (diversity factor) that is not mentioned regularly in the media. But there are a lot more attributes you can look at in building a diverse team beyond skin colour, sexual orientation or gender: think of personality, viewpoints, religion and age, among others. Most of these diversity factors (or identity attributes) are very subjective in nature. People decide for themselves with which personality, viewpoints, religion, etc. they identify, but they are also perceived by other people.
Other people's perception does not have to be equal to how the individual perceives themselves. This is partly why you cannot talk about diversity in a superficial way: diversity factors make up someone's identity (self-identity). And identity is one of the most complex topics in psychology and sociology. It is made up of many, many different attributes and different ways of perceiving them. It is, literally speaking, not black and white. Diversity is about recognizing the benefit of variance in the collective identity, the identity of the whole. The value of diversity should not only be carried by the discussion in the workplace; it's a necessity in life. Without gender diversity, for example, we wouldn't even have existed; a somewhat equal distribution of males and females means that we can reproduce and continue to exist (in perpetuity, hopefully). We need to be a diverse collective in order to survive and thrive. The true benefit of having a diverse team is that you can learn a lot more from each other than when you are surrounded by similar people. You need different people to understand different situations, possible solutions and personas. Take your customers, for example: they most likely also show a variety of backgrounds, characteristics and preferences. In order to understand them and serve them, you should have different people who can resonate with the different identities (or personas) within your target group. Ten Dutch heterosexual balding white men in their forties might be able to give each other great advice on hair loss prevention, but they wouldn't be able to truly sympathize with their diverse customer target group. That diverse target group consists of, alongside balding white men, a range of identities including young adults, Asian people, lesbian women and any combination of different attributes. There is a lot more to learn in a diverse team than in a homogeneous one. So if diverse teams are a key differentiator for business, why isn't everybody building them?
Because there are some common challenges in creating a truly diverse team. The most recurring ones are (unconscious) bias, the diversity misunderstanding, the network gap and stereotyping. Bias is an inclination or prejudice for or against someone or something. People are naturally biased. Most bias happens unconsciously: you're not aware that you are prejudiced towards a certain identity (usually one similar to your own). Unconscious bias has been extensively demonstrated in science. For example, fictitious resumes with white-sounding names sent to help-wanted ads were 50% more likely to receive responses for interviews than resumes with African-American-sounding names. A bias frequently occurring in recruitment is affinity bias, the unconscious preference for people who are like yourself. As described in this blog, a lot of people do not understand what diversity is and how it is beneficial to the team. The consequences of this misunderstanding are that hiring companies give their employees the wrong incentives for diversity hiring, their messaging is off (sometimes very awkwardly missing the entire point), and the attractiveness of working for the company is not in any sense adapted to a diverse workforce. Teams are partly sourced from the existing network of current employees, whether it's the recruiter or any other team member. If your team is already not diverse, a snowball effect comes into play, because your current network primarily consists of people similar to you. Every time you post your open position on social media, for example, your similar connections will see it, and not those different from you outside of your network. Some things are a fact, but possibly for the wrong reasons. Engineering being a job done predominantly by males doesn't mean that males are better at engineering.
For most people there is a stronger association between males and engineering jobs than between females and engineering jobs, simply because there are currently more male engineers. That stronger association makes people prone to pursue male engineers. This male engineer stereotype illustrates, at least partly, a self-fulfilling prophecy. So how do we cope with these challenges, and what practical guidelines can we take into consideration in building a diverse team? From my point of view there are some steps we can take to improve our diversity sourcing efforts. Embrace the power of AI to revolutionize your approach to talent acquisition. This is how to find new passive candidates with an integrated sourcing tool for Ashby.
Session-based test management is used to manage exploratory testing. Various methodologies have been developed for Agile testing processes. At this stage, QA testers analyze the potential impact created by changing a product section. You'll make customers happy by delivering reliable products and regular, stable releases. That helps you prioritize features for each iteration and deliver the most important ones first. All of which will improve customer satisfaction and retention rates. The Agile testing approach ensures that the teams test the software so that the code is clean and tight. Testers work as a part of the development team, report on quality issues that can affect end users, and suggest solutions. The scope of software testing and the role of testers in the development process are rapidly evolving. Enterprises today focus on delivering quality and releasing products faster. Making the right choice between traditional and agile testing is essential to accomplishing this. As soon as a user story is completed on the development side, the testing team steps in to quality-check the software. Developers and testers work in tandem to execute testing in an agile environment. Traditional methods of testing usually involve a structured, step-by-step approach to development with little room for flexibility or innovation. Agile testing methodology has gained a foothold in software development in recent years, replacing older, more structured methods of testing. Agile testing allows developers and testers to work collaboratively with each other, with other departments, and with end users. Customers are involved throughout the development process through customer demos, user story mapping workshops, etc. They also interact with the teams daily to stay well aware of the product's progress. Agile testing is a software testing approach that follows the principles and rules of Agile software development.
Because it follows these rules, it helps new software developers solve problems with ease. Maintaining quality involves a blend of exploratory and automated testing. As new features are developed, exploratory testing ensures that new code meets the quality standard in a broader sense than automated tests alone. Testing begins at the start of the project, with ongoing integration between testing and development. The common objective of agile development and testing is to achieve high product quality. On our development teams, QA team members pair with developers in exploratory testing, a valuable practice during development for fending off more serious bugs. The Agile testing quadrants (https://globalcloudteam.com/glossary/agile-testing/) separate the whole process into four quadrants and help to understand how agile testing is performed. As mentioned above, the definition of done is a key element of Agile testing. The best way to achieve this essential visibility and alignment is to use a dedicated test case management tool like Helix ALM. When you create a user story, you need to define the acceptance criteria. Once development and testing are underway, close communication and collaboration remain important. To successfully implement modern testing practices, you need QA experts who help you work with digital as well as legacy systems for unmatched performance. Moving to once-a-week releases can be beneficial, gradually transitioning to multiple builds per week. Ideally, development builds and testing should happen daily, meaning that developers push code to the repository every day and builds are scheduled to run at a specific time. To take this one step further, developers would be able to deploy new code on demand. To implement this, teams can employ a continuous integration and continuous deployment (CI/CD) process. CI/CD limits the possibility of a failed build on the day of a major release.
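The daily-build cadence described above can be captured in a minimal CI pipeline configuration. The following is an illustrative sketch in GitHub Actions syntax; the `make test` entry point is an assumption for the example, not something from the article:

```yaml
# Illustrative CI pipeline: build and test on every push, so a broken
# build surfaces the same day instead of on release day.
name: build-and-test
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and run the test suite
        run: make test   # hypothetical project-specific entry point
```

Extending the same job with a deploy step gated on the test job is the usual path from CI to CD.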
This is the daily status update meeting that helps track testing activities day by day. It also provides an opportunity to discuss potential issues or blockers impacting product delivery. You can raise problems or concerns there for quick resolution. It is crucial to understand which metrics are required to improve software testing in an Agile SDLC. Apart from defect management, you can also use Agile testing metrics to improve software product quality. This type of testing can be conducted by a tester-tester pair or even a developer-tester pair. For example, if the code for the authentication and login user story is ready, unit testing is run to check whether the login works as expected. Continuous testing supports ongoing trends in teams, including DevOps and CI/CD integration. Testing early and often with the help of automation will continue in 2022. Kanban is another widely followed agile practice, derived from manufacturing industries. Successful adoption of this framework requires real-time communication and transparency at work. Written in the developer's language, technology-facing tests are used to evaluate whether the system delivers the behaviors the developer intended. While there isn't a single formula to follow, given the variation in teams' backgrounds and resources, some standard elements should be factored into an agile testing strategy. Acceptance tests represent a user's perspective and specify how the system will function. As in the BDD approach, acceptance tests are written first, they initially fail, and then software functionality is built around the tests until they pass. Continuous integration is key to Agile development success. Agile testing is a software testing methodology aligned with the principles of agile software development.
Agile development emphasizes collaboration, flexibility, and continuous iteration, and Agile testing is designed to support these principles by providing a flexible and adaptable approach to testing. Agile is one of the most famous project management frameworks in software development. Agile software testing is a methodology that helps developers test their code continuously and rapidly. It also allows testers to get immediate feedback from customers. Much like code review, we've seen testing knowledge transfer across the development team because of this. When developers become better testers, better code is delivered the first time. The goal of agile and DevOps teams is to sustainably deliver new features with quality. However, traditional testing methodologies simply don't fit into an agile or DevOps framework. The pace of development requires a new approach to ensuring quality in each build. Take a detailed look at our testing approach with Penny Wyatt, Jira Software's Senior QA Team Lead.
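The "authentication and login" user story mentioned earlier is a good illustration of the kind of unit test a developer-tester pair would run. Here is a minimal sketch in Python; the `authenticate` function and the credentials are hypothetical, not from any real system:

```python
# Hypothetical code under test for an "authentication and login" user story.
USERS = {"alice": "s3cret"}

def authenticate(username: str, password: str) -> bool:
    """Return True only when the user exists and the password matches."""
    return USERS.get(username) == password

# Acceptance-criteria-style checks agreed between developer and tester:
assert authenticate("alice", "s3cret") is True   # happy path
assert authenticate("alice", "wrong") is False   # wrong password
assert authenticate("bob", "s3cret") is False    # unknown user
```

In a CI/CD setup, checks like these run on every push, so a broken login is caught the day it is introduced.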
Command line project packaging fails in some circumstances due to HttpModule initialization

Having a problem building a project that includes Cesium on our continuous integration server. Locally it builds fine, even when using the same exact scripts. Stack trace is the following:

LogWindows: Error: [Callstack] 0x00007ffdfc059eb6 UE4Editor-Core.dll!ReportAssert() [D:\Git\unrealengine\Engine\Source\Runtime\Core\Private\Windows\WindowsPlatformCrashContext.cpp:1627]
LogWindows: Error: [Callstack] 0x00007ffdfc05d688 UE4Editor-Core.dll!FWindowsErrorOutputDevice::Serialize() [D:\Git\unrealengine\Engine\Source\Runtime\Core\Private\Windows\WindowsErrorOutputDevice.cpp:78]
LogWindows: Error: begin: stack for UAT
LogWindows: Error: [Callstack] 0x00007ffdfbd6e8fd UE4Editor-Core.dll!FOutputDevice::LogfImpl() [D:\Git\unrealengine\Engine\Source\Runtime\Core\Private\Misc\OutputDevice.cpp:61]
LogWindows: Error: === Critical error: ===
LogWindows: Error: [Callstack] 0x00007ffdfbd05235 UE4Editor-Core.dll!AssertFailedImplV() [D:\Git\unrealengine\Engine\Source\Runtime\Core\Private\Misc\AssertionMacros.cpp:104]
LogWindows: Error:
LogWindows: Error: [Callstack] 0x00007ffdfbd072d0 UE4Editor-Core.dll!FDebug::CheckVerifyFailedImpl() [D:\Git\unrealengine\Engine\Source\Runtime\Core\Private\Misc\AssertionMacros.cpp:461]
LogThreadingWindows: Error: Runnable thread TaskGraphThreadNP 11 crashed.
LogWindows: Error: [Callstack] 0x00007ffde80a46dd UE4Editor-HTTP.dll!FHttpModule::Get() [D:\Git\unrealengine\Engine\Source\Runtime\Online\HTTP\Private\HttpModule.cpp:205]
LogWindows: Error: Assertion failed: IsInGameThread() [File:D:/Git/unrealengine/Engine/Source/Runtime/Online/HTTP/Private/HttpModule.cpp] [Line: 205]
LogWindows: Error: [Callstack] 0x00007ffdce5197dd UE4Editor-CesiumRuntime.dll!<lambda_2dfd0129d23e58fc669b06552b81075a>::operator()<CesiumAsync::AsyncSystem::Promise<std::shared_ptr<CesiumAsync::IAssetRequest> > >() [D:\tests\UnrealSource\Prism\Plugins\CesiumForUnreal\Source\CesiumRuntime\Private\UnrealAssetAccessor.cpp:131]
LogWindows: Error:
LogWindows: Error: [Callstack] 0x00007ffdce52512f UE4Editor-CesiumRuntime.dll!CesiumAsync::AsyncSystem::createFuture<std::shared_ptr<CesiumAsync::IAssetRequest>,<lambda_2dfd0129d23e58fc669b06552b81075a> >() [D:\tests\UnrealSource\Prism\Plugins\CesiumForUnreal\Source\ThirdParty\include\CesiumAsync\AsyncSystem.h:280]
LogWindows: Error: HandleError re-entered.
LogWindows: Error: [Callstack] 0x00007ffdce557941 UE4Editor-CesiumRuntime.dll!UnrealAssetAccessor::requestAsset() [D:\tests\UnrealSource\Prism\Plugins\CesiumForUnreal\Source\CesiumRuntime\Private\UnrealAssetAccessor.cpp:128]
LogWindows: Error:
LogWindows: Error: [Callstack] 0x00007ffdce5d153a UE4Editor-CesiumRuntime.dll!std::basic_istream<char,std::char_traits<char> >::sentry::~sentry() []
LogWindows: FPlatformMisc::RequestExit(1)
LogWindows: Error: [Callstack] 0x00007ffdce5d1f2c UE4Editor-CesiumRuntime.dll!std::basic_istream<char,std::char_traits<char> >::sentry::~sentry() []
LogWindows: Error:
LogWindows: Error: [Callstack] 0x00007ffdce5cef4b UE4Editor-CesiumRuntime.dll!std::basic_stringbuf<char,std::char_traits<char>,std::allocator<char> >::`vector deleting destructor'() []
LogCore: Engine exit requested (reason: Win RequestExit)
LogWindows: Error: [Callstack] 0x00007ffdce5da064 UE4Editor-CesiumRuntime.dll!std::_Ref_count_obj2<CesiumAsync::Impl::AsyncSystemSchedulers>::_Destroy() []
LogWindows: Error: [Callstack] 0x00007ffe512e4b89 KERNELBASE.dll!UnknownFunction []
LogWindows: Error: [Callstack] 0x00007ffdfbb0b583 UE4Editor-Core.dll!TGraphTask<FAsyncGraphTask>::ExecuteTask() [D:\Git\unrealengine\Engine\Source\Runtime\Core\Public\Async\TaskGraphInterfaces.h:886]
LogWindows: Error: [Callstack] 0x00007ffdfbb1d968 UE4Editor-Core.dll!FTaskThreadAnyThread::ProcessTasks() [D:\Git\unrealengine\Engine\Source\Runtime\Core\Private\Async\TaskGraph.cpp:1065]
LogWindows: Error: [Callstack] 0x00007ffdfbb1ebb0 UE4Editor-Core.dll!FTaskThreadAnyThread::ProcessTasksUntilQuit() [D:\Git\unrealengine\Engine\Source\Runtime\Core\Private\Async\TaskGraph.cpp:888]
LogWindows: Error: [Callstack] 0x00007ffdfbb25f35 UE4Editor-Core.dll!FTaskThreadAnyThread::Run() [D:\Git\unrealengine\Engine\Source\Runtime\Core\Private\Async\TaskGraph.cpp:965]
LogWindows: Error: [Callstack] 0x00007ffdfc075b3b UE4Editor-Core.dll!FRunnableThreadWin::Run()
[D:\Git\unrealengine\Engine\Source\Runtime\Core\Private\Windows\WindowsRunnableThread.cpp:86]
LogWindows: Error: [Callstack] 0x00007ffdfc06e6c0 UE4Editor-Core.dll!FRunnableThreadWin::GuardedRun() [D:\Git\unrealengine\Engine\Source\Runtime\Core\Private\Windows\WindowsRunnableThread.cpp:35]
LogWindows: Error: [Callstack] 0x00007ffe52877034 KERNEL32.DLL!UnknownFunction []
LogWindows: Error: [Callstack] 0x00007ffe53962651 ntdll.dll!UnknownFunction []

From what I can gather, an async task is being fired by Cesium at UnrealAssetAccessor.cpp:132 which creates an Http request. The problem is that, if it's the first time we're using the http fetch system, we eventually end up in HttpModule.cpp:201 (FHttpModule& FHttpModule::Get()), which will initialize the module. The issue is that the module needs to be initialized on the game thread; otherwise, the application comes crashing down. Probably we just have to preemptively call FHttpModule::Get() during Cesium initialization to make sure that when it's used from a different thread later on, it's already initialized.

@tiagomagalhaes I don't totally understand in what circumstances this would be necessary, but I think it's reasonable to add some code to the UnrealAssetAccessor's constructor to make sure the HttpModule is ready to go. Would you be up for opening a pull request like that?

It's not as simple as I first thought. It looks like, during cooking, Cesium is somehow firing off a requestAsset before StartupModule for FHttpModule gets to run. It doesn't happen on my machine, which is making debugging harder. I will try to add more information when I have it.

Ok, I got it. FCesiumRuntimeModule::StartupModule() should call FModuleManager::Get().LoadModuleChecked(TEXT("HTTP")); Looking into this, Epic's ModuleInterface comments use this exact case as an example of the kind of thing that should go inside StartupModule. Will open up the pull request with this change.

This was fixed in #442.
Translations table has two auto_increment columns

Laravel Version: 5.4
Voyager Version: 1.0
PHP Version: 5.6.4
Database Driver & Version: 14.14 Distrib 5.6.37, for Linux (x86_64)

Description: The generation of the translations table is failing because the SQL statement for it attempts to make two columns auto_increment.

> php artisan voyager:install --with-dummy
Setting up the hooks
Hooks are now ready to use! Go ahead and try to "php artisan hook:install test-hook"
Publishing the Voyager assets, database, language, and config files
Copied Directory [/vendor/tcg/voyager/publishable/assets] To [/public/vendor/tcg/voyager/assets]
Copied Directory [/vendor/tcg/voyager/publishable/database/migrations] To [/database/migrations]
Copied Directory [/vendor/tcg/voyager/publishable/database/seeds] To [/database/seeds]
Copied Directory [/vendor/tcg/voyager/publishable/demo_content] To [/storage/app/public]
Copied Directory [/vendor/tcg/voyager/publishable/lang] To [/resources/lang]
Publishing complete.
Publishing complete.
Migrating the database tables into your application

[Illuminate\Database\QueryException]
SQLSTATE[42000]: Syntax error or access violation: 1075 Incorrect table definition; there can be only one auto column and it must be defined as a key (SQL: create table `translations` (`id` int unsigned not null auto_increment primary key, `table_name` varchar(32) not null, `column_name` varchar(32) not null, `foreign_key` int unsigned not null auto_increment primary key, `locale` varchar(11) not null, `value` text not null, `created_at` timestamp null, `updated_at` timestamp null) default character set utf8mb4 collate utf8mb4_unicode_ci)

[Doctrine\DBAL\Driver\PDOException]
SQLSTATE[42000]: Syntax error or access violation: 1075 Incorrect table definition; there can be only one auto column and it must be defined as a key

[PDOException]
SQLSTATE[42000]: Syntax error or access violation: 1075 Incorrect table definition; there can be only one auto column and it must be defined as a key

Note: Support staff in the Slack channel had me replace part of the translations migration code:

$table->string('table_name')->unique();
$table->string('column_name')->unique();
$table->integer('foreign_key')->unsigned()->unique();
$table->string('locale')->unique();

To:

$table->string('table_name', 32)->unique();
$table->string('column_name', 32)->unique();
$table->integer('foreign_key', 11)->unsigned()->unique();
$table->string('locale', 11)->unique();

This change was made because I was still receiving "specified key too long" errors despite applying the patch for Laravel 5.4.

Any progress on this? It has been open for almost a year now.

I'm not sure this is valid (anymore). The translations table migration does not specify multiple auto-increment fields: https://github.com/the-control-group/voyager/blob/1.1/migrations/2017_01_14_005015_create_translations_table.php. I'm going to close this unless someone else reports the same issue and can identify the cause.
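For context, one likely cause (this explanation is the editor's, not stated anywhere in the thread): in Laravel's schema builder the second argument of `integer()` is `$autoIncrement`, not a display length, so the suggested `$table->integer('foreign_key', 11)` marks the column auto-increment and produces exactly the 1075 error quoted above. A sketch of a form that avoids it, using the column names from the thread:

```php
// Blueprint::integer($column, $autoIncrement = false, $unsigned = false):
// passing 11 as the second argument makes the column auto_increment.
// A safer equivalent for a plain unsigned foreign-key column:
$table->string('table_name', 32);
$table->string('column_name', 32);
$table->unsignedInteger('foreign_key'); // no auto_increment, no length arg
$table->string('locale', 11);
```

Unlike MySQL's `INT(11)`, Laravel's integer column types take no display-width argument at all, which is the mismatch behind the bad advice.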
How to generate relationship properties through .moor file?

I have a .moor file like this:

CREATE TABLE Foos (
    id TEXT NOT NULL PRIMARY KEY,
    name TEXT NOT NULL
) AS Foo;

CREATE TABLE Bars (
    id TEXT NOT NULL PRIMARY KEY,
    fooId TEXT NOT NULL REFERENCES Foos(id),
    name TEXT NOT NULL
) AS Bar;

Now I need the class Foo to be generated as:

class Foo extends DataClass implements Insertable<Foo> {
    final String id;
    final String name;
    final List<Bar> bars;
}

But it won't generate the bars property =\ I know I can always use Dart to generate the table classes and use .moor only for indexes and queries, but I wonder if some of these would be possible:

- To automatically generate relationship properties (a list of bars in Foo and an instance of Foo in each Bar).

- To allow us to apply one or more mixins. Let's say we have this mixin:

abstract class ListOfBarsMixin {
    final List<Bar> bars = <Bar>[];
}

And in the .moor file we could write:

CREATE TABLE Foos (
    id TEXT NOT NULL PRIMARY KEY,
    name TEXT NOT NULL
) AS Foo WITH ListOfBarsMixin;

Notice the WITH [ListOfMixins]. That would generate the following:

class Foo extends DataClass with ListOfBarsMixin implements Insertable<Foo> { }

- To allow us to create queries and indexes outside .moor files (i.e. not use .moor files at all):

@Index(name: "IX_Foo_IdAndName", fields: ["id", "name"])
@Index(name: "IX_Foo_OnlyName", fields: ["name"])
class Foo extends Table { ... }

@UseMoor(tables: [Todos, Categories])
class MyDatabase extends _$MyDatabase {
    @Insert("INSERT INTO Foo(id, name) VALUES (:id, :name)")
    Future<int> anyName(String id, String name); // If bodyless methods are not allowed, an empty body will not hurt

    @Select("SELECT * FROM Foo WHERE id = :id")
    Future<Foo> getFooById(String id); // It would use `getSingle` if the return type is not a list, and `get` otherwise
}

Thanks for the ideas!

> To automatically generate relationship properties (a list of bars in Foo and an instance of Foo in each Bar).
Doing this based on CREATE TABLE statements alone will probably result in inaccurate code. Just because there is some other table Bars referencing Foos, it doesn't necessarily mean that a list of Bar is a logical property of Foo. It would also break queries in moor files, since it's impossible to write a query having a List<Bar> in a column.

> To allow us to apply one or more mixins.

We already have #114 as an issue for this.

> To allow us to create queries and indexes outside .moor files (i.e. not use .moor files at all).

If it helps, you can write queries in a @UseMoor and @UseDao annotation (under the queries parameter). The syntax you're suggesting (having an annotated abstract method) would be hard to implement now, since moor generates a superclass for you. To support abstract methods, it would have to generate an implementation for the interface you provide. In general, I don't really see moor as an ORM. To me, it's more of a convenience framework around SQL, so it inherits most of its concepts. I know there is interest in adding more ORM-like features, but those would also add a lot of complexity to the project. This includes automatic joins or subqueries (which we would need when generating a List<Bar>) and customizing the generated classes. In my experience, attempting to abstract away the relational model rarely works well and can easily add a lot of complexity. See also "What ORMs have taught me: just learn SQL" and "Embracing SQL without abstraction". I'm not saying that those features don't add value, but I'm reluctant to work on them if they're very complex.
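The `queries` parameter mentioned above takes a map from method name to SQL, from which the generator emits a typed method. A rough sketch of what that could look like for the tables in this issue; the file name, table name and constructor are assumptions for the example, not verified against a real project:

```dart
// Sketch: a compiled custom query declared on the annotation instead of in
// a .moor file. Moor's generator would emit a getFooById method for this.
@UseMoor(
  include: {'tables.moor'},
  queries: {
    'getFooById': 'SELECT * FROM Foos WHERE id = :id',
  },
)
class MyDatabase extends _$MyDatabase {
  MyDatabase(QueryExecutor e) : super(e);

  @override
  int get schemaVersion => 1;
}
```

This keeps queries next to the Dart code while the table definitions stay in the .moor file.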
Modify volume mounts SELinux labels on the fly based on :Z or :z

carry of #5910, closes #5910

@jfrazelle I updated my patch with some of the fixes required and also added the currently merged libcontainer patch so I could clean up my patch. Please pull that into your patch.

ah will update thanks

@rhatdan I will rebase this today, screwed myself over by merging the other volumes PR, facepalm

I already rebased, you might want to grab it.

@jfrazelle need rebase

yeah so I was going to rebase on top of michaels patch to save myself a headache

On Tue, May 5, 2015 at 3:28 PM, Alexander Morozov<EMAIL_ADDRESS>wrote: @jfrazelle https://github.com/jfrazelle need rebase — Reply to this email directly or view it on GitHub https://github.com/docker/docker/pull/12572#issuecomment-99250505.

I have updated my original pull requests. Any word on this being merged?

we are trying to get it in :) I will take a look

On Tue, May 12, 2015 at 4:52 AM, Daniel J Walsh<EMAIL_ADDRESS>wrote: I have updated my original pull requests. Any word on this being merged. — Reply to this email directly or view it on GitHub https://github.com/docker/docker/pull/12572#issuecomment-101249865.

Can you please sign your commits following these rules: https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work

The easiest way to do this is to amend the last commit:

$ git clone -b "selinux-labels-carry" git@github.com:jfrazelle/docker.git somewhere
$ cd somewhere
$ git rebase -i HEAD~2
(editor opens: change each 'pick' to 'edit', save the file and quit)
$ git commit --amend -s --no-edit
$ git rebase --continue
(repeat the amend for each commit)
$ git push -f

This will update the existing PR, so DO NOT open a new one.

ok i updated the pr here with a few changes

@rhatdan Sorry for the late question, but any reason not to always relabel when SELinux is enabled? Said otherwise: do we really need the :zZ syntax? (cc @crosbymichael)

How would docker know whether or not to relabel?
If I want a shared image or a private image, if SELinux is disabled in docker, or if I am running a privileged container. If I volume mount in content that the container can read, there is no reason to change the label. For example -v /usr/bin:/usr/bin In certain cases we definitely do not want to relabel. For example mounting /home/dwalsh into a container could cause lots of problems. Alright, makes sense. LGTM typo fixed I realize this is probably out of scope for the release, but ideally wouldn't we save the existing selinux context and restore it when the bind mount/container is torn down? The problem right now is that if you use the :zZ option the relabel happens, the context changes, and then the next time you go to mount that volume into the container it doesn't need the :zZ option and you may have ended up in the situation @rhatdan is describing. Mounting your home dir into a container shouldn't screw up the context permanently. If you want to restore the context to a safe context, it would probably be better to do a restorecon on the content rather than saving the labels, since there could be several labels that get changed. restorecon -R -v PATH sets the labels back to the system default. I could build a patch to do this but please do not hold up this patch set waiting for this. This patch set in one form or another is over a year old. One problem with this also is if you label the content with a z, it is for a shared container. It would be difficult to figure out whether all containers are no longer using the volume. I'm fairly certain we should be able to detect if the file/directory was still bind mounted and then not try to restore the selinux context if that was the case. updated Still LGTM! Ping @LK4D4.
@jfrazelle Need rebase :) Will do tomorrow :) I assume because of the volume refactor ;) basically had to rewrite a lot of it for the volumes refactor, so this needs a review, ping @rhatdan @LK4D4 @icecrime Reviewed: agree with all @cpuguy83 remarks (nice catches!), none to add. updated Janky is sad panda. I believe this is wrong: bind.RW = mode == "rw" Should probably be: bind.RW = rwModes(mode) @rhatdan good catch, updated hang on, updating to lock the entire range... https://github.com/jfrazelle/docker/commit/4c7371667eeeda380c3f7fbc6d3cdcba2ad09564#commitcomment-11373232 Code LGTM. Ping @cpuguy83 for another review! updated.... and i like the log Ok, testing on CentOS I see there's a problem (and a regression from the volume refactor). In this PR, labels are only set when a driver is specified for the bind, so... docker run -v <name>:/path instead of for all binds (i.e., docker run -v /test:/test). Also a regression is that the vfs graphdriver was automatically setting labels to s0; the local driver does not, and so volumes created by the local driver are not accessible at all. Code LGTM I think @cpuguy83 is working through a problem @rhatdan huh, thanks for the reminder! Can you please sign your commits following these rules: https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work The easiest way to do this is to amend the last commit: $ git clone -b "selinux-labels-carry" git@github.com:jfrazelle/docker.git somewhere $ cd somewhere $ git rebase -i HEAD~4 (editor opens; change each 'pick' to 'edit', save the file and quit) $ git commit --amend -s --no-edit $ git rebase --continue # and repeat the amend for each commit $ git push -f This will update the existing PR, so DO NOT open a new one.
@cpuguy83 it looks like there are a bunch of errors around volumes in experimental. I'm not completely sure why. updated w feedback from @rhatdan @cpuguy83 @calavera PTAL code LGTM, thanks @jfrazelle and @cpuguy83 ! omg im having deja vu Pushing this to docs review, please @rhatdan give it a final look if you have a chance! Ping @moxiegirl @SvenDowideit! this already had docs review just fyi on the old one Right, sorry @jfrazelle. Let's merge this :tada: Anyone have an example of what the "shared content label" looks like --- is it visible through a command? Files get labeled directly by the kernel; if you change the label, we refer to that as relabeling, or changing the label. Thanks @rhatdan, one follow-on question on this statement: "By default, volumes are not relabeled." Is it correct then that I have this going on conceptually: Volume + label from host | ------------container-------- | |____content + label Yes, by default no label changes happen. Only if the user specified z or Z. If they specify a container volume that gets created in the vfs directory, it will get created with the shared label, since it is assumed that container volumes will be shared between containers. So what's the syntax for multiple options? -v /mnt/uploads:/mnt/uploads:rw,z? -v /mnt/uploads:/mnt/uploads:rw:z? @derekstavis I believe it's the comma syntax. valid modes: https://github.com/docker/docker/blob/master/volume/volume.go#L33-L50 Oh, thanks @cpuguy83. I'm pretty new to Docker; these options aren't documented. Are they in experimental? Looks like it's not working. Here comes some debug info: Host machine mount point: $ mount ... /dev/sdb1 on /run/storage type ext4 (rw,noexec,nosuid,nodev,mode=0755) ...
Docker volume flag: -v /run/storage:/mnt/uploads:rw,z Container mount point: $ sudo docker exec 6ed0e172b7ac mount ... /dev/sdb1 on /mnt/uploads type ext4 (ro,relatime,errors=remount-ro,data=ordered) ... @derekstavis You are correct, it doesn't appear to be properly documented. I opened #15896 to track it. z is not a real mount flag, only a docker option. The way to test would be to do ls -Z /run/storage. It would have a label attached to it which allows a containerized process to write to it. This is the output of ls -Z on both machines: On the host machine: $ ls -Z /run/storage ? lost+found ? test On the container: $ sudo docker exec 6ed0e172b7ac ls -Z /mnt/uploads ? lost+found ? test @derekstavis This would need to be done on the host side, not in the container. Can we move this to IRC? Happy to help there. Keep in mind every maintainer gets an email with every comment on GH. Yes, we can. I'm dereks. Sorry for the buzz. Is this CLI only? @chadfurman It's in the API. I would suggest that you modify these labels manually, though.
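Docker's actual mode validation lives in the Go file linked above. As a language-neutral illustration only (this is not Docker's code, and the names here are made up), the comma-separated mode string can be sketched like this:

```python
# Hypothetical sketch of validating a comma-separated volume mode string
# such as "rw,z" -- illustrative only, not Docker's actual parser.

VALID_MODES = {"rw", "ro", "z", "Z"}

def parse_mode(mode: str) -> dict:
    """Split a mode string like 'rw,z' and validate each option."""
    opts = mode.split(",") if mode else []
    rw_opts = [o for o in opts if o in ("rw", "ro")]
    label_opts = [o for o in opts if o in ("z", "Z")]
    # At most one access option and one label option may appear.
    if len(rw_opts) > 1 or len(label_opts) > 1:
        raise ValueError(f"invalid mode: {mode}")
    for o in opts:
        if o not in VALID_MODES:
            raise ValueError(f"invalid mode option: {o}")
    return {
        "writable": "ro" not in opts,    # read-write is the default
        "relabel_shared": "z" in opts,   # shared content label
        "relabel_private": "Z" in opts,  # private, unshared label
    }
```

This accepts the comma syntax (`parse_mode("rw,z")`) and rejects the colon variant (`parse_mode("rw:z")` raises), matching the answer above.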
On September 23, 1997 at 22:03, "Marcos A. Souza" wrote: The MIME standards are the way they are for a very good reason. Until recently, most documents on the internet with the extension .doc were formatted ASCII text files. Different systems have different conventions about extensions and file names. The internet is about getting different systems to work together. The MIME header standard is well designed. It splits the responsibility of dealing with different sorts of files in a correct way. Thanks for your explanation! I know that the MIME header is the way. But it seems that Eudora, for example, has a special function for correctly parsing attachments. For example: I send an RTF file as text/plain and Note, the word "correctly" is not accurate. Eudora is relying on information that can be ignored by MIME MUAs (for good reason). Content-type is probably the most critical part of MIME, and for Eudora to not use it properly (note, other MUAs are guilty also) is just plain negligence. MHonArc put it in the message as text. Eudora makes a link to it in the message. Why doesn't Eudora put it in the message too? Should it be possible to identify possibly wrong MIME types? The client might correctly tell me that a document is application/pdf, text/plain, application/... but if the client is wrong (and I know this because of the file extension - JPG is not text/plain, for example), is there a way to The MIME MUA is not "wrong". It should make its determination by the content-type setting. File extensions are not unique (which has been discussed before on this list and even on comp.mail.mime). MHonArc does allow you to use the filename specified in the message, but security notes are included in the documentation, since using the filename can compromise your system. A negative that MHonArc suffers from, that a regular MUA does not, is interactivity with the user. I.e.
MHonArc runs in batch, so you (the user) do not make the decision for every attachment on how it should be extracted. The options you specify to MHonArc apply to all messages you process. So it is best to err on the side of caution. If senders set the content-type fields correctly, everything will work as expected. Earl Hood | University of California: Irvine ehood(_at_)medusa(_dot_)acs(_dot_)uci(_dot_)edu | Electronic http://www.oac.uci.edu/indiv/ehood/ | Dabbler of SGML/WWW/Perl/MIME
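The point made above — that a conforming MUA should trust the Content-Type header rather than the filename extension — can be illustrated with Python's standard email library (a modern sketch for illustration; MHonArc itself is Perl):

```python
# Sketch: a MIME-conformant tool determines an attachment's type from the
# Content-Type header, not from the (unreliable) filename extension.
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "report"
msg.set_content("body text")
# The sender attached a PDF but carelessly gave it a .txt name.
msg.add_attachment(b"%PDF-1.4 ...", maintype="application",
                   subtype="pdf", filename="report.txt")

for part in msg.iter_attachments():
    declared = part.get_content_type()  # what the header says
    name = part.get_filename()          # what the extension suggests
    # Correct behaviour: act on `declared`, not on `name`'s extension.
    print(declared, name)
```

Here the header says `application/pdf` while the filename suggests plain text; a Eudora-style extension guess would mishandle exactly this case.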
Libraries & Presets A Library acts as a container for other presets, stored in the .xrnl file format, which is easily installed via drag and drop. Once installed, a content pack will immediately make its presets available via the preset menu(s). As these presets can be many different "things", the installer tries to be helpful and will tell you what was installed - whether that content pack was a bunch of new instruments, some multi-sample presets, or perhaps a collection of DSP chains. If you choose to save one of your own presets, it is stored in a special place - the User Library. This is essentially the same location as where content packs go, and this location is shared between Renoise and Redux - any file saved to the Renoise user library, or any installed content pack, will also be available to Redux, and vice versa. A folder is created for your own presets and files. These files are kept in a separate location to avoid their accidental deletion and to provide easy access via Finder/File Explorer. - Windows: HOME/[My ]Documents/Renoise/User Library - OSX: HOME/Documents/Renoise/User Library - Linux: HOME/Renoise/User Library Specific Preset Types Instruments are constructed from a variety of parts: Phrases, Keyzones, Waveforms, Modulation and Effects. Specific preset types for each of these sections are available from a drop-down menu at the top right corner, where you can load, save, import and export. This allows an existing instrument to load the various presets into their specific sections without overwriting the whole instrument. A library can contain any of these file types. - Phrases (.xrnz) - A phrase preset is an XML file describing the number of lines, columns and other settings (loop, tempo etc.) that make up a single phrase. - Keyzones (.sfz) - A multi-sample preset is an .sfz file, an open standard format for describing musical instruments. - Waveform (.flac) - Waveform presets are just samples stored as .flac files.
- Modulation (.xrno) - A Modulation Set preset is a collection of modulation envelopes that affect various sample domains (volume, panning, etc.). The preset itself is a simple XML file. - Effects (.xrnt) - An Effect Chain preset describes the effect devices and parameter values that make up an effect-chain. The preset itself is a simple XML file. A library can also contain two other file types: - Effect-Devices (.xrdp) - An Effect-Device preset defines the parameters of a single effect device. This can be any of the regular devices or a Doofer (a special combination of other devices). - Themes (.xrnc) - Themes are alternative visual styles for the interface and are stored as simple XML files. New themes can be created, or existing ones edited, in the Themes tab of the preferences menu. XRNL Library Creation The easiest way to build up a library is to save presets from inside Renoise, creating a library based on an existing user library. Build The Collection Libraries are laid out in the same way as the user library. For example, it might have the following structure: - - Samples - - Multi-Samples - - Instruments Note that these folders are created automatically as you save a preset. Inside these folders, you can organize presets in folders too. This is supported and shown in the Renoise/Redux interface, but you can't reorganize or move files around from there - you need to open an explorer/finder window to organize your files. Any changes performed there should automatically be reflected in the Renoise user interface. So, after a bit of customization your file structure might now look like this (with folders expanded): - - Samples - - Ambience - - Channel - - Field Recordings - + Multi-Samples - + Instruments - - Synth - - Bass - + FX - - Scifi Add A Manifest Once you feel the content is ready to be exported, add a manifest to the root folder (use a basic text editor such as Notepad+ and copy the following text as a starting point).
<?xml version="1.0" encoding="UTF-8"?> <Author>Username (plus email, link, whatever)</Author> <Description>Amazing pack by username for Renoise+Redux</Description> The only thing that is really important to get right is the name - it needs to follow a certain naming convention, in the form `abc.def.ghi`. For example `com.renoise.elements` for one of our Renoise-published libraries, but you can use whatever name you like as long as it has those three parts. By now, the file structure might look like this: - + Samples - + Multi-Samples - + Instruments You can even add additional files if you wish to - perhaps you have a PDF document, or a 'readme' of some kind? These files will not be useful to Renoise, but will still be installed on the user's hard drive as part of the library. Creating The Library File Using a zip archive utility (on Windows, 7-Zip is recommended), you first compress the inside of the root folder, and then assign it the name provided in the manifest plus the Renoise library file extension, `.xrnl`. So, in the case of our example pack, the file name would become `org.username.examplepack.xrnl` If you want, you can now install the library to check that everything has worked. To install in Renoise, drag the file on top of the Renoise window. To install in Redux, drag the file on top of the plugin window, or click the load button. A successful install should result in a message such as: - Library 'xxx' was successfully installed. - It contains Instrument presets. If the installer encountered a problem, hopefully you will get a useful error message that can help to track down the problem.
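Since an .xrnl is just a zip archive of the root folder's contents under a new name, the packaging step can also be scripted. A hedged Python sketch of that step (the function name and layout are illustrative, not part of Renoise):

```python
# Sketch: package a library folder as a .xrnl file. An .xrnl is an
# ordinary zip archive of the *inside* of the root folder, renamed.
import os
import zipfile

def pack_library(root: str, name: str, out_dir: str = ".") -> str:
    """Zip the contents of `root` into `<name>.xrnl` (name = abc.def.ghi)."""
    assert name.count(".") >= 2, "name must have three dot-separated parts"
    out_path = os.path.join(out_dir, name + ".xrnl")
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for dirpath, _dirs, files in os.walk(root):
            for f in files:
                full = os.path.join(dirpath, f)
                # store paths relative to the root, not including it
                zf.write(full, os.path.relpath(full, root))
    return out_path
```

Packing a folder containing the manifest and the Samples/Instruments subfolders under the name `org.username.examplepack` yields `org.username.examplepack.xrnl`, installable by dragging onto the Renoise window.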
Introducing NGRX Actions 3.0 In NGRX Actions 3.0, we have some exciting new features: - Remove the need to call createReducer - Dependency injection in our stores - Addition of effects in our stores None of these changes are breaking, so all the old APIs you were using still work. Let's take a minute to break down what changed and why. No need to call createReducer anymore The createReducer method was a factory function that would change our ngrx-actions classes into reducers that NGRX could handle. This made ngrx-actions super flexible and easy to add in to any ngrx project; however, because AoT does not allow function invocations in module declarations, we had to write some verbose code to handle this. One of the main reasons I wrote ngrx-actions was to reduce boilerplate, and well, this was boilerplate. So now with ngrx-actions 3.0 you can invoke the NgrxActionsModule with your classes directly, removing the need for this. That's much nicer. Again, this is not a breaking change, so you can still do it the old way. Dependency Injection in our Stores Oftentimes we write services to do things like calculate permissions, but because NGRX stores are pure functions we have no way to actually use those services in our store. In Angular, we rely on DI quite a bit, so why not leverage that capability in our stores too? Previous to 3.0, we had the createReducer function call I mentioned above that would create our reducer factory for us. In 3.0, we enhanced this so that when you pass your classes to the NgrxActionsModule, it is able to resolve them using the injector, thus giving us the ability to use DI. So let's take a look at how we do this: In this example, we did a few things to make it DI-able. First we added the Injectable decorator to our store. Then in our module, we used our new pattern of initializing stores and also added PizzaStore as a provider so it can resolve our app dependencies. Presto!
Effects in Stores As I mentioned above, one of the main reasons I created ngrx-actions was to reduce the boilerplate of NGRX. Having separate files for all these different pieces (actions, selectors, reducers, effects) just felt daunting. So why not merge actions and effects into one logical place? Now, I can already hear some of you disagreeing with this change, stating it blurs the separation of concerns, but I disagree. In an event-sourcing world, you are dispatching events, listening to those events, and doing some action, whether that is manipulating state or dispatching a saga (saga is commonly used in discussions of CQRS to refer to a piece of code that coordinates and routes messages between bounded contexts and aggregates). On top of the logical organization of these, I felt like NGRX effects are really dense, hard to read, and difficult for beginners. Your effects often have lots of redundancies that could be simplified too. Let's take a look at an effect that I have today: This effect is pretty intense, but what it's doing is actually pretty simple: - Listen for the Cancel Pizza action - Map the payload, which is the justification for canceling - Get the pizzas in the store - Map those to a new object which contains the justification and the pizzas from the store - Call the service to cancel the pizzas - Emit the cancel pizza event and show a notification In ngrx-actions 3.0 this could be reduced quite a bit. Isn't that nice?! ngrx-actions 3.0 provides some quite powerful new features. All these features are optional and non-breaking, and you can use as much or as little as you like. Hope you enjoy!
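ngrx-actions itself is TypeScript, but the core idea behind createReducer — deriving a single reducer from a class whose tagged methods each handle one action type — is language-neutral. A Python sketch of that pattern (the decorator and function names here are illustrative, not the library's actual API):

```python
# Sketch of the concept behind createReducer: collect methods tagged with
# the action type they handle, then dispatch to them from one reducer.
# Names are illustrative only, not ngrx-actions' API.

def action(action_type):
    """Tag a store method as the handler for one action type."""
    def wrap(fn):
        fn._handles = action_type
        return fn
    return wrap

def create_reducer(store):
    """Build a single reducer from all @action-tagged methods on `store`."""
    handlers = {
        m._handles: m
        for m in (getattr(store, n) for n in dir(store))
        if callable(m) and hasattr(m, "_handles")
    }
    def reducer(state, act):
        handler = handlers.get(act["type"])
        # Unknown actions leave the state untouched, as in NGRX.
        return handler(state, act) if handler else state
    return reducer

class PizzaStore:
    @action("ADD_PIZZA")
    def add_pizza(self, state, act):
        return {"pizzas": state["pizzas"] + [act["payload"]]}
```

Passing an instance (rather than constructing it inside the factory) is what opens the door to dependency injection, which is the enhancement the post describes.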
Zip Backup to CD screenshots To open an existing backup job select: Files/Open backup job. To create a new backup job, first clear the current selection: mark all selected folders in the folder list and select Backup/Remove folder. Now you are ready to add the folders to back up: select Backup/Add folder to backup. Continue with adding all the folders which you want to back up. Decide whether all files in the selected folders shall be backed up, or only the files which have been changed since the last backup in which the archive bit was cleared. Check the field Clear archive bit to get the archive bit of the backed-up files cleared after they are backed up. This makes it possible afterwards to make a backup of only the files changed since the last full backup. In addition to the above selection you also have the option to back up files which have been changed after a specific date and time by checking the field Newer than and submitting a date and time. A special feature is the possibility to automatically use the date and time of the last execution of a backup job as the newer-than date and time. The feature is enabled or disabled with the menu Backup/Use Last runtime as newer than. This feature is especially useful if you want to make backup copies at very short intervals to have the possibility of saving the different stages of a project. The Zip file created will normally get a name in the format 000627100ak.zip, according to the date and followed by a number starting with 100 for the first file created that day. The number may be followed by an "a", which means that only files with the archive attribute set (files changed since the last backup) have been backed up, or a "c", which means that it is a copy and the archive attributes have not been cleared after the backup of the files. You can also specify a name which shall precede the created number. Now it should be decided where the backup files shall be stored.
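The file-naming scheme described above (YYMMDD date, a per-day number starting at 100, then an optional "a" or "c" flag) can be sketched as follows. The field layout is inferred from the text and the 000627100ak.zip example, so treat it as an approximation rather than the tool's exact rule:

```python
# Hedged sketch of the backup-file naming scheme: YYMMDD date, running
# number from 100, then optionally 'a' (archive-bit files only) or
# 'c' (copy, archive bits not cleared). Approximate, inferred from text.
from datetime import date

def backup_name(day: date, seq: int, archive_only: bool = False,
                copy: bool = False, prefix: str = "") -> str:
    flags = ("a" if archive_only else "") + ("c" if copy else "")
    return f"{prefix}{day:%y%m%d}{100 + seq}{flags}.zip"
```

For example, an archive-bit-only backup made on 2000-06-27 as the first file of the day would be named `000627100a.zip`.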
It is recommended to store the file on a fixed disk drive before copying it to a CD-R. The Zip file size can be specified by selecting one of the fixed sizes or by specifying your own size. When the Zip file has reached the selected size, the program will continue to create the next file if there is enough free space on the destination drive. If there is not enough free space, the program will stop the zipping process and allow the user to insert a new removable medium, or copy the created file to a CD-R and thereafter delete the file to make free space. The user shall then press OK and the zipping process will continue with the next Zip file. The compression factor can be selected by the user. Selecting a compression factor above the middle will slow down the zipping process and should only be used if the Zip file size is of great importance. To start the restoring process of one or more files, begin by opening the Zip file containing the files by selecting the menu Restore/Open file to restore from. You may also use any other Zip program to unzip the files from the Zip file. Select the folder, drive or network share where the files shall be restored to. Check the field Add stored path to destination path to get the restored files placed in the same folders as the originals, from which they were backed up. Unchecking this field will restore all files in the destination path regardless of their original path. Decide if the files restored shall overwrite existing files and when the files shall be overwritten. Mark the files which shall be restored and press Start to begin the restoring. It is possible to change the sort order of the file list by double-clicking the column header. The log shows information on backup jobs processed. To read in the latest information press the Update button. The information is stored in a standard text file, ZipBackup.Log. Here it is possible to schedule the opened backup job to run at different times.
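The "Add stored path to destination path" option described above boils down to a single choice during extraction: keep each member's stored relative path, or flatten everything into the destination folder. An illustrative Python sketch of that distinction (not the tool's actual code):

```python
# Sketch of the restore-path choice: extract each zip member either
# under its stored relative path or flattened into the destination.
# Illustrative only -- not Zip Backup to CD's implementation.
import os
import zipfile

def restore(zip_path: str, dest: str, keep_stored_paths: bool = True):
    with zipfile.ZipFile(zip_path) as zf:
        for member in zf.namelist():
            if member.endswith("/"):  # skip directory entries
                continue
            target = (os.path.join(dest, member) if keep_stored_paths
                      else os.path.join(dest, os.path.basename(member)))
            os.makedirs(os.path.dirname(target) or dest, exist_ok=True)
            with zf.open(member) as src, open(target, "wb") as out:
                out.write(src.read())
```

With `keep_stored_paths=True`, a member stored as `docs/a.txt` lands in `dest/docs/a.txt`; with it unchecked, it lands directly in `dest/a.txt` regardless of its original path, matching the description above.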
To schedule another job, open the job as described under Backup. A job scheduled monthly will run on the first occurrence of the days checked, e.g. the first Monday in the month, or on the first day of the month if a date is specified. A job scheduled weekly will run on the first occurrence of the days checked. A job scheduled daily will run every day. A job scheduled once will only run once. When the date and time for the job to schedule have been selected, press the Add button to add the job to the schedule. Remove a job by selecting it in the lower list and pressing the Remove button. The upper list contains the jobs with name and time to be run next. The lower list contains the schedules as added. Zip Backup to CD can be started automatically every time the computer is started by checking the menu Files/Start
If spoilers are released in flight without descending, what could happen? Welcome to Aviation.SE! Can you add some more details about the situation you are asking about? "what could happen?" is very broad... Spoilers are also called speedbrakes. If used in level flight, they will slow the aircraft down. They are usually used to slow down and descend, but they can also be used to slow down in level flight to meet a speed restriction. On large jetliners, spoilers can also be used for roll control, so it is quite common to see them deployed in all phases of flight. Well, that depends on what else you do. Spoilers will increase drag and destroy lift. So if you are on autopilot and do not increase power (thrust), the A/P would increase AOA to increase lift sufficiently to maintain level flight (further increasing drag), and the aircraft would start slowing down. If you were flying manually, you would probably manually increase back pressure to increase AOA and lift to maintain level flight, and increase power to maintain airspeed. This would be necessary to prevent a descent. If you didn't take those actions, the nose would drop and the aircraft would descend at an increasing speed and descent rate until your speed increased sufficiently to generate enough lift to maintain a stable descent. In some sailplanes at least, spoilers do not cause the nose to drop and airspeed to increase. In fact, on those sailplanes they can be used in an emergency to descend through clouds with no horizon reference, because the craft will enter stable (in both roll and pitch) descending flight. I'm not a sailplane person, but I'm curious. How does it descend without the nose dropping? And also, since lift is, at least initially, decreased, won't the airspeed increase due to altitude loss to stop the glide angle from just continuously dropping?
@CharlesBretana -- well, if spoilers cause the glider to trim to a higher angle-of-attack, but also a higher sink rate -- there you go. However, if there is a substantial increase in sink rate, I have a hard time believing that the nose is not going to end up at a somewhat lower pitch attitude, due to the changed direction of the flight path. I'd want to see video evidence (or see it first-hand) before accepting the idea that spoilers might not cause the nose to drop at all. Of course the whole question of how the airspeed responds depends on starting conditions. If in an extremely steep dive (very low a-o-a), drag from spoilers may be more important than loss of lift, causing airspeed to decrease. That's the whole point of "terminal velocity" dive brakes, which some sailplanes do have -- you can point the nose straight down (e.g. accidentally, in cloud) and not blow through redline. @quietFlyer, yes, I guess every scenario has a spectrum of actually possible conditions under which it can occur. I'm (again) not a sailplane pilot, so I don't have any expertise on their behavior... But I agree with your skepticism. On your point about airspeed, yes, remember the Stukas in WW2! As dive angle increases, it takes less and less increased lift to maintain (or stabilize) that increased dive angle without increasing it further. The lift required to stabilize flight at any specific dive angle is the aircraft weight times the cosine of the dive angle. And the cosine function does not start to decline significantly until about 25 - 30 degrees. "Without descending" means that the aircraft cannot land. Therefore it will run out of fuel. In an attempt to avoid descending, it will slow and eventually stall. At this point, the conditions of the question, no descent, are violated. As asked, the question is unanswerable. Hah! What if there's higher terrain ahead? (Grin) @CharlesBretana Ohhh... interesting consideration!
The homework problem didn't specify not climbing, so maybe that could be allowed. Sorry, just injecting a bit of humor there!
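The cosine relationship mentioned in the comments above — the lift needed to stabilize a dive is the aircraft weight times the cosine of the dive angle — is easy to check numerically:

```python
# Lift required to hold a stabilized descent at a given dive angle is
# W * cos(angle); the cosine barely declines until roughly 25-30 degrees.
import math

def lift_fraction(dive_angle_deg: float) -> float:
    """Fraction of aircraft weight the wings must support in a
    stabilized descent at the given dive angle."""
    return math.cos(math.radians(dive_angle_deg))

for angle in (0, 10, 20, 30, 45, 90):
    print(f"{angle:3d} deg -> {lift_fraction(angle):.3f} W")
```

At 20 degrees the wings still carry about 94% of the weight, and even at 30 degrees about 87%, which is why the decline only becomes significant past roughly 25-30 degrees; in a vertical (Stuka-style) dive the required lift falls to zero.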
Why am I measuring such high THD+N on my active high-pass filter? I have designed a 2nd-order active Sallen-Key high-pass filter for use in an audio circuit. An LTspice schematic is shown below, where VDD is 5 V. I have also breadboarded the circuit using low-quality components (X5R caps and 1/4 W, 5% resistors). Using an Audio Precision audio analyzer, I drove the breadboarded circuit with a 1 Vrms differential signal swept from 20 Hz to 20 kHz and measured the following response: The sweep looks good in my opinion and the -6 dB cutoff point is at 3.16 kHz (very close to the expected 3.18 kHz). I measured the THD+N via the Audio Precision using a 1 Vrms signal and a 200 kΩ measurement load. Below about 12 kHz, the THD+N is pretty bad (0.5% - 0.7%). Above 12 kHz, the THD+N is very close to the datasheet value of the TL084 opamp being used (THD+N ~= 0.0006%). I suspect that the wide tolerance of the resistors and the low-quality dielectric material of the capacitors may be to blame here, but does anything else in the circuit diagram / design seem like a likely cause for undesirably high THD+N? Is it even necessary for me to worry about the THD+N of this circuit? I've only ever measured THD+N on full-bandwidth amplifiers before, not filters. Try a larger C5.. 12 nV / rt(Hz) @ 1 kHz, what do you see? Your high-pass filters pass high-frequency distortion and noise but reduce the fundamental frequency. Your 5 V supply voltage is much less than the 10 V recommended on the datasheet. Measure THD vs Vout @ 10 kHz. You may be seeing asymmetry or slew-rate limiting at large outputs causing THD, so report exact test results and settings and load capacitance. The number one suspects are the ceramic capacitors. Here is a SE link that shows the severe voltage dependence that ceramics have. The X5Rs are some of the worst, especially in the small SMD sizes. For any analog signals, audio or otherwise, use NP0 (Thanks Spehro Pefhany) ceramic or film capacitors instead.
Ceramics are fine for bypass and decoupling. Just be sure to increase the value to compensate for voltage dependence. Manufacturers don't all specify it, so you may have to measure the voltage dependence. The common VCC/2 bias will cross-couple at low frequencies. It may not cause THD, but interference may increase. Better to buffer the voltage divider (VD) with a unity-gain amp, or have a separate VD for each stage. @Audioguru makes a good point. You need to keep the signal away from VCC and GND for good THD, even with rail-to-rail amps. When you work with a single supply, there are lots of modern choices better than the TL08x. Whether breadboarding or production: always put a decoupling capacitor from VDD to GND as close to the chip as possible. A quad amp needs a bigger cap than duals or singles. Always tie off unused amplifiers to prevent coupling through the supply rails. +1 NP0 50V 1nF ceramic caps are easily available in sizes as small as 0603, and they have almost no voltage coefficient. The bypass caps should not matter for voltage coefficient, but I also don't like the coupling. Thanks @SpehroPefhany. Forgot the NP0. And yes, the bypass caps are fine. I'll fix it. Did you try 10V for the supply instead of the 5V that barely works with an old TL084 quad opamp? Are the other three opamps in the package properly deactivated? Thanks @Audioguru: Very important comment "The X5Rs are some of the worst" The higher-K stuff is even worse, like Z5V. Those are atrociously bad if they see almost any non-zero voltage across them. If you can keep the capacitor voltage down to a few mV, then they are not super-bad, not great either. They make decent varicaps, and are available in a huge range of values.
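For reference, the cutoff the poster measured can be cross-checked against the standard second-order Sallen-Key formula f_c = 1/(2π√(R1·R2·C1·C2)). The component values below are assumptions chosen for illustration (the post does not list them), picked to land near the expected 3.18 kHz:

```python
# Cross-check of the Sallen-Key cutoff frequency. The R and C values
# below are assumptions for illustration -- the post does not give them.
import math

def sallen_key_fc(r1: float, r2: float, c1: float, c2: float) -> float:
    """Cutoff frequency (Hz) of a 2nd-order Sallen-Key filter."""
    return 1.0 / (2 * math.pi * math.sqrt(r1 * r2 * c1 * c2))

# Equal-component guess that reproduces roughly the expected ~3.18 kHz:
fc = sallen_key_fc(10e3, 10e3, 5e-9, 5e-9)
print(f"fc = {fc:.0f} Hz")  # ~3183 Hz, close to the measured 3.16 kHz
```

Note the cutoff formula has no bearing on the distortion itself; the answers above point at the X5R capacitors' voltage coefficient, the low 5 V supply, and the unbuffered VCC/2 bias as the THD+N culprits.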
Why is a TypeBuilder generated generic MethodInfo not a generic method? I have some code that uses a MethodInfo of a generic method found on a generated type. To avoid some reflection, I have the code use the ldtoken Method ldtoken Type call GetMethodFromHandle(RuntimeMethodHandle, RuntimeTypeHandle) pattern to generate the MethodInfos at compile time. However, if the MethodInfo belongs to a generic type and is itself a generic method, things get screwy. Here is some code that simply generates a GM that emits an open version of its MethodInfo. If I call it to retrieve the method and then try to close it over a specific type, I get a perplexing exception: System.Reflection.MethodInfo GM[M]() is not a GenericMethodDefinition. MakeGenericMethod may only be called on a method for which MethodBase.IsGenericMethodDefinition is true. Here is the relevant code: var aBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(new AssemblyName("Test"), AssemblyBuilderAccess.RunAndSave); var mBuilder = aBuilder.DefineDynamicModule(aBuilder.GetName().Name, true); var typeBuilder = mBuilder.DefineType("NameSpace.Generic`1",TypeAttributes.AutoClass | TypeAttributes.Sealed | TypeAttributes.Public,typeof(object)); var TypeGenerics = typeBuilder.DefineGenericParameters(new[] { "T" }); var methodBuilder = typeBuilder.DefineMethod("GM", MethodAttributes.Public | MethodAttributes.Static | MethodAttributes.HideBySig); var methodGenerics = methodBuilder.DefineGenericParameters(new[] { "M" }); methodBuilder.SetSignature(typeof(MethodInfo), null, null, Type.EmptyTypes, null, null); var ilgenerator = methodBuilder.GetILGenerator(); var typeBuilderClosedOverT = typeBuilder.MakeGenericType(TypeGenerics); ilgenerator.Emit(OpCodes.Ldtoken, methodBuilder); ilgenerator.Emit(OpCodes.Ldtoken, typeBuilderClosedOverT); ilgenerator.Emit(OpCodes.Call, typeof(MethodBase).GetMethod( "GetMethodFromHandle", BindingFlags.Public | BindingFlags.Static, null, new[] { typeof(RuntimeMethodHandle),
typeof(RuntimeTypeHandle) }, null ) ); ilgenerator.Emit(OpCodes.Castclass,typeof(MethodInfo)); ilgenerator.Emit(OpCodes.Ret); var bakedType = typeBuilder.CreateType(); var methodInfo = bakedType.MakeGenericType(typeof(int)).GetMethod("GM").MakeGenericMethod(typeof(bool)).Invoke(null, null) as MethodInfo; var methodInfoClosedOverBool = methodInfo.MakeGenericMethod(typeof(bool)); It seems the only time my code screws up is when it's a generic method on a generic type. If the code is rewritten so that it's about a normal method on a normal type, a generic method on a normal type, or a normal method on a generic type, it all works. It's only the combination of both that causes errors. Am I doing something wrong? I submitted a bug about this issue: https://connect.microsoft.com/VisualStudio/feedback/details/775989/clr-cannot-emit-a-token-for-an-open-generic-method-on-a-generic-type That's interesting: it looks like methodInfo.IsGenericMethodDefinition is indeed false, although methodInfo.GetGenericArguments() returns a type that is a generic parameter. Yeah, inspecting it certainly makes it look like the damn thing is generic. My current solution is to make sure my methods have unique names, pass in the Type and a string, and call type.GetMethod(...), but I'd like to avoid the reflection. Seems that it is now resolved in .NET 4.7. Looks like a CLR issue to me, because the same thing happens if you write the IL by hand and use ilasm.
That is, given a generic class G and a non-generic class N, each with a generic method M, trying to get the generic method definition from the non-generic class works: ldtoken method void class N::M<[1]>() ldtoken class N call class [mscorlib]System.Reflection.MethodBase [mscorlib] System.Reflection.MethodBase::GetMethodFromHandle( valuetype [mscorlib]System.RuntimeMethodHandle, valuetype [mscorlib]System.RuntimeTypeHandle) castclass [mscorlib]System.Reflection.MethodInfo ret but the MethodInfo returned from the generic class is not a generic method definition (but it almost is; it's D.MakeGenericMethod(D.GetGenericArguments()) where D is the method definition you want): ldtoken method void class G`1<!T>::M<[1]>() ldtoken class G`1<!T> call class [mscorlib]System.Reflection.MethodBase [mscorlib] System.Reflection.MethodBase::GetMethodFromHandle( valuetype [mscorlib]System.RuntimeMethodHandle, valuetype [mscorlib]System.RuntimeTypeHandle) castclass [mscorlib]System.Reflection.MethodInfo ret Well, thanks. I actually just did the same thing with concrete types (List.ConvertAll) and I'm going to report this as a bug to Microsoft. I doubt much will happen :( What's interesting is that this IL cannot be compiled back: "error : syntax error at token '['" The problem lies within the ldtoken method instruction because, due to the inability of IL to express generic method definitions, the CLR loads the wrong method. The instruction is decompiled by ildasm to this: ldtoken method class [mscorlib]System.Reflection.MethodInfo class NameSpace.Generic`1<!T>::GM<[1]>() Which isn't even valid IL. The CLR then messes up the instruction and instead loads a generic method instantiation from its own generic parameters. var methodInfoClosedOverBool = (methodInfo.IsGenericMethodDefinition ?
methodInfo : methodInfo.GetGenericMethodDefinition()).MakeGenericMethod(typeof(bool)); For more tests, I've made shorter code showing the same issue: DynamicMethod dyn = new DynamicMethod("", typeof(RuntimeMethodHandle), null); var il = dyn.GetILGenerator(); il.Emit(OpCodes.Ldtoken, typeof(GenClass<string>).GetMethod("GenMethod")); il.Emit(OpCodes.Ret); var handle = (RuntimeMethodHandle)dyn.Invoke(null, null); var m = MethodBase.GetMethodFromHandle(handle, typeof(GenClass<int>).TypeHandle); GetMethodFromHandle (which should really only require the method handle, not the declaring type) just sets the declaring type (notice that <int> or <string> doesn't matter) and doesn't do anything wrong.
STACK_EXCHANGE
Hello everyone, welcome back to a new tutorial in the Git & GitHub tutorial series. In the last tutorial we learnt about Git's features and working mechanism. In this tutorial we are going to learn the basic commands that are generally used in Git. Basic commands and operations of Git— Here is the list of basic Git operations and the commands for each. - To change directory– This is one of the most frequently used operations; it comes up every time we start the Git shell. By default, when you start the Git shell, it opens in your home directory. To change directory, we use the cd command, which stands for Change Directory. For example, to change directory to local disk C, give this command: cd /C. To move into any folder, such as the git folder, use this command: cd git/ - To list files– Sometimes we need to list all the files in a folder. To do this, use the ls command. It lists all the files in the folder you are currently in. - To configure your GitHub user name and email id— To start the communication between your GitHub account and Git, you need to configure your GitHub user name and email id with the git config command. - To clone a repository– This is a common operation in Git. To clone a repository, first go to your GitHub account, open the repository you want to clone, click the Clone or Download option, and copy the URL you find there. Then go to your Git Bash and give this command– git clone "paste the copied URL" and press Enter. Your repository is now cloned. - To add a file to a repository– If you want to add any file to a particular repository, go to that repository and give this command– git add "file name" - To check the status of files in a repository– To check the status of files, first go to that repository, give this command— git status and press Enter. It will show you the status of the files in that repository: which changes are staged, which are not, and which files are untracked.
- To commit changes– Before transferring any file to your GitHub account, you need to commit those files and then upload them. To commit, give this command— git commit -m "your commit message" "filename" - To transfer files to GitHub– To transfer files to your GitHub, give this command– git push -u origin master - To create a branch– If you want to create a new branch using Git Bash, give this command– git checkout -b [name_of_your_new_branch] - To change the working branch– If you want to change the working branch, simply give this command– git checkout [name_of_your_branch] - To push a branch to GitHub— After creating a new branch, you need to push it to GitHub, so give this simple command– git push origin [name_of_your_new_branch] » To learn about the Git installation and configuration process, the features and working mechanism of Git, the introduction to Git & GitHub, the basic components of GitHub, and creating GitHub pages, see the earlier tutorials in this series. So, these are the important basic commands of Git. I hope all of these concepts are now clear. If you have any query regarding this tutorial, please ask in the comment box. Your questions are appreciated. Thank you!
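The commands above can be strung together into one end-to-end session. Here is a minimal sketch run against a local scratch repository; the repository name, file name, and identity are placeholders made up for illustration, and the push step is only shown in a comment because it would need a real GitHub remote:

```shell
# Sketch of the workflow from this tutorial, using a local scratch repo.
# All names (demo-repo, readme.txt, the identity) are placeholders.
set -e
mkdir demo-repo && cd demo-repo
git init -q                                   # start a fresh repository
git config user.name  "Your Name"             # configure your identity
git config user.email "you@example.com"
echo "hello git" > readme.txt
git add readme.txt                            # stage the file
git status --short                            # shows the file as staged ("A")
git commit -q -m "add readme"                 # commit with a message
git checkout -q -b my-feature                 # create and switch to a branch
git rev-parse --abbrev-ref HEAD               # prints "my-feature"
# With a GitHub remote named "origin" added, you would then run:
#   git push -u origin master
#   git push origin my-feature
```

Running this in any empty directory is safe; it never touches the network, so it is a convenient way to practice the commands before pushing anything to GitHub.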
OPCFW_CODE
How do cement boards and Kerdi boards for wall tile compare? I came across an interesting product the other day and thought it might be perfect for an upcoming project. I'm redoing the tile backsplash in my kitchen and was planning on using cement board behind the tile, but now I'm not so sure. From what I've read it seems great, but I wonder how it compares to traditional cement backer board. So, has anybody used KERDI-BOARD? What advantages/disadvantages does it have? Unless you've got a lot of intricate cuts (tough with concrete board) or expect to have a LOT of water on your backsplash on a regular basis, it's probably just simpler to go with backerboard. I have used a Kerdi-Board competitor, Wedi, for a shower and found it very easy to work with. Easy to cut and seal. Comes with sealant (caulk tubes) and fastening bolts (which might be tricky to drive in small backsplash spaces). To avoid flex, you want to use a thicker board. It's pretty strong stuff, particularly for a kitchen backsplash, which shouldn't bear much (any?) weight or pressure. I have also used the original Kerdi membrane (not the newer board you mention) for a shower. I have a lot of confidence in the quality of the seal with the Kerdi membrane, but it's quite a bit of extra work if you haven't done it before, and is probably overkill for a backsplash (I assume the counter top runs below). As for cost: my personal opinion is that small jobs are precisely the place to spend the extra money (assuming the quality is better for what you need). If I had to do 100 bathrooms, the added cost of these newer products would really add up. If I'm doing 1 or 2, then the difference is relatively small. As a bit of history: the Kerdi membrane (from Schluter Systems) is one of (the?) original products in this space. Wedi board came along as a competitive product, eliminating the need to put up drywall and then apply the membrane. Schluter responded with the Kerdi board.
I haven't used KERDI-BOARD itself, but I have used similar products. Mostly, the biggest advantages are found in weight and longevity of the material. The synthetic foam will last longer than traditional concrete and produce significantly less dust when cutting and installing. The biggest disadvantage will be the cost. Synthetics (KERDI-BOARD and similar products) tend to run on the expensive side. Some quick Googling turned up about $10/sheet for 5'x3'x1/2" cement boards. KERDI-BOARD in a similar dimension (4'x5'x1/2") would run about $77. So ... if you're working on a DIY project, stick with the less expensive, traditional building materials. If you can afford to buy massive quantities in bulk (or know a contractor willing to do it for you), look into the more advanced stuff. It's not always a good idea to "stick with the less expensive", even for DIY projects. Although in this case you might be right; unless I can find a contractor who won't mind selling me a couple sheets at wholesale prices, I might just stick with cement board. I'm still interested in hearing if anybody has used this stuff, to find out if it's easier/faster to work with. Being able to cut it with a utility knife sounds like it could save some time, but as I've never handled it, I wonder how rigid/durable it is. @EAMann Do you have any sources on synthetic foam lasting longer than traditional concrete? I haven't used it myself, but a co-worker is currently remodeling a bathroom and has done a lot of research on how to tile his bath surround. His impressions of Kerdi-Board: it's slightly flexible, so tiles may work loose over time if it's not reinforced properly. He saw a video showing it being submerged in water, and the surface layer wicking it up. That shouldn't be a real-life problem unless you do something really wrong with the grouting and caulking. The manufacturer recommends unmodified thin-set, which might be an issue for certain types of tile.
Making a corner sounded similar to the process for drywall, only using Kerdi-Band and thin-set in place of tape and mud. What tile types were you thinking would be a problem for unmodified thin-set? @HerrBag: From their website: Exceptions: Certain moisture-sensitive stones, e.g., green marble, or resin-backed tiles may not be appropriate for use in wet areas or may require special setting materials. I believe using 1/2" Kerdi-Board is much better than Kerdi membrane over 1/2" drywall. It's cleaner (no gypsum dust etc.). It's probably more waterproof (moisture will not deteriorate Kerdi-Board). It's only about 30% more in cost: about $440 for a large 4-foot by 8-foot shower, or 96 sq. ft. of wall space, which is a big shower. 1/2" Kerdi-Board is $3.60/sq. ft.; assume about another $1.00/sq. ft. for Kerdi-Band, so 96 sq. ft. x $4.60 = $441. Using wallboard, 3 x $10 = $30, and Kerdi membrane, $2.10/sq. ft. x 96 = $201, plus $96 of Kerdi-Band: $30 + $201 + $96 = $327. This means you pay $114 for a better result. Given that your shower's overall cost is around $2000, this is a good investment. I ended up tackling my basement bathroom renovation by myself. I started out with cement board and a close cousin to it. Halfway through putting up my walls, I discovered Kerdi-Board while in a tile store. You won't see the Kerdi-Board system in Home Depot or Lowes because most of the sales staff do not know how to install it. But the big upside to the Kerdi-Board system is the weight. I can lift 2 of these boards where I was struggling with one cement board. I pretty much left up my moisture barrier but took down the halfway job of cement board. It's been about 9 months since the completion of my bathroom. I can't wait to start another project and to work on my learning curve. It's like getting a new toy. It's that light.
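As a sanity check, the cost comparison in that answer can be reproduced with a few lines of arithmetic. The prices and the 96 sq. ft. area are the ones quoted above, rounded down to whole dollars the way the answer does; this is only a back-of-the-envelope sketch, not a quote:

```shell
# Recompute the answer's cost comparison for 96 sq. ft. of shower wall.
awk 'BEGIN {
  area        = 96                          # sq. ft. of wall, as quoted
  board_rate  = 3.60 + 1.00                 # Kerdi-Board + Kerdi-Band, $/sq.ft.
  kerdi_board = int(area * board_rate)      # $441 (rounded down, as in the answer)
  membrane    = int(3 * 10 + area * 2.10 + 96)  # drywall + membrane + band: $327
  printf "Kerdi-Board: $%d  membrane-over-drywall: $%d  difference: $%d\n",
         kerdi_board, membrane, kerdi_board - membrane
}'
```

The difference comes out to the $114 premium the answer cites for the all-board route.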
STACK_EXCHANGE
Does VLC media player store the files or its history in a hidden location? Does VLC media player store the played files hidden somewhere? I share my computer with a fellow student and do not want him to see what I have been watching. Is the history of what's played in VLC stored or logged anywhere on the computer? I want to know if there is a hidden file somewhere that shows what I have been watching through the media player, and vice versa, that shows what my roommate may have been watching. Why aren't you using multiple accounts on the computer, so you don't have to worry about information being shared? That's kind of the purpose behind having accounts. Use a portable VLC/browser from your flash drive? @Zoredache, that wouldn't work. The exe may be in that portable folder, but it writes to the local file. @Bart Silverstrim Because the "fellow student" is really his wife, haha. On a Linux system, there is a file $HOME/.config/vlc/vlc-qt-interface.conf which contains a section named [RecentsMRL]; that section holds the recent history. On Windows (7) the Recent Media list is stored in the %appdata%\vlc\vlc-qt-interface.ini file. Open it and look for a line that says [RecentsMRL]. You should see the list below it. But, strangely enough, this file is rewritten every time you open VLC. Effectively, the recent media list is kept somewhere else and is written out when VLC is opened. Windows 11 also stores it in the same place. In VLC 2.1.4 you will have to access the advanced settings. VLC menu Preferences... Show all (button, bottom left corner) ► Interface ► Main interfaces      macosx ☐ Keep recent items On Mac you may also need to manually delete "/Users/yourusername/Library/Preferences/org.videolan.vlc.plist". Adding a note: if you do that, all the settings will be lost; make sure you uncheck that box after you delete the file, if you find that necessary. There is a "recently used" list that is saved by VLC. And there are two answers for your question.
The "easy" way you are probably expecting: You can disable the behavior by opening the Tools -> Settings dialog, selecting the "Interface" section (would usually be preselected) and deactivating the "Save recently played items" option. The "hard" way you should consider for your own good: simply set up a different non-admin account for your roommate on this computer, set up a complex password and never, under any circumstances, give it away to anyone else. Better yet, use EFS to encrypt all data in your profile directory - just in case somebody manages to get an administrative account on this machine. I'm using v2.2.4 on Win 10, and my "Save recently played items" option has been unchecked, yet I still always see a list of recently played items when I right-click "VLC media player" in the Windows taskbar on the bottom of the screen. So it seems like that option is broken. Ohh, it's because Win 10 has a separate "jumplist" feature that needed to be disabled: https://topbullets.com/2013/11/04/how-to-disable-vlc-recent-played-history-on-dock-taskbar/ All applications that store a recently used list do so within an .automaticDestinations-ms file: %AppData%\Microsoft\Windows\Recent\AutomaticDestinations\<name>.automaticDestinations-ms To disable VLC's/other application's recently used list/jumplists: Navigate to: %AppData%\Microsoft\Windows\Recent\AutomaticDestinations (View: detailed | Sort: ascending date) Open a file in VLC/other application (Note what .automaticDestinations-ms file moves to the top of the list as currently modified) Close VLC/other application → Open the .automaticDestinations-ms file in a text editor Select all: Ctrl+A → Del → Save changes: Ctrl+S Make the .automaticDestinations-ms file Read-Only: Cmd /c Attrib +R "%AppData%\Microsoft\Windows\Recent\AutomaticDestinations\<name>.automaticDestinations-ms" Verify correct .automaticDestinations-ms file was modified (Open a file in VLC/other application and it will remain 0KB) PS $ Cmd /c Dir 
"%AppData%\Microsoft\Windows\Recent\AutomaticDestinations\9fda41b86ddcf1db.automaticDestinations-ms" Volume in drive C is System Volume Serial Number is xxxx-xxxx Directory of C:\Users\JW0914\AppData\Roaming\Microsoft\Windows\Recent\AutomaticDestinations 2018.02.07 08:50 0 9fda41b86ddcf1db.automaticDestinations-ms 1 File(s) 0 bytes On Windows, making the file %appdata%\vlc\vlc-qt-interface.ini read-only [right-click -> Properties -> Attributes] will prevent it from showing any previously opened media files. Update: clear the played list within the app before making the file read-only.
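On Linux, the cleanup described above can be scripted. The sketch below blanks the recent-media entries in vlc-qt-interface.conf. It is demonstrated on a local sample file rather than the real one, and the list=/times= key names inside [RecentsMRL] are an assumption about that file's typical layout; for real use, point CONF at "$HOME/.config/vlc/vlc-qt-interface.conf" and only run it while VLC is closed, since VLC rewrites the file on exit:

```shell
# Demo on a local sample file; the [RecentsMRL] section mirrors what the
# answers above describe (list=/times= keys are assumed). For a real
# cleanup, set CONF="$HOME/.config/vlc/vlc-qt-interface.conf" instead.
CONF="./vlc-qt-interface.conf"
cat > "$CONF" <<'EOF'
[RecentsMRL]
list=file:///home/me/secret.mkv
times=0

[General]
geometry=abc
EOF
# Blank the recent-media keys inside [RecentsMRL] only; later sections
# (everything from the next "[" header on) are left untouched.
sed -i -e '/^\[RecentsMRL\]/,/^\[/{s/^list=.*/list=/;s/^times=.*/times=/}' "$CONF"
grep '^list=' "$CONF"     # the entry is now empty: "list="
```

The same idea as the Windows read-only trick applies afterwards: `chmod a-w "$CONF"` would stop VLC from repopulating the list, at the cost of also freezing any other interface settings stored in that file.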
STACK_EXCHANGE
Sun/Microsoft Lawsuit: It Doesn’t Matter Ever since Microsoft licensed Java from Sun, we've heard lots of debate about how complete Microsoft's support would be. The strongest expression of that debate has been Sun's lawsuit against Microsoft, and plenty of people believe that the outcome of this suit will be significant. The lawsuit has come to the conclusion of a preliminary stage, but what ultimately may happen in court very likely won’t matter at all. To see why, it's important first to distinguish between Java the language and Java the OS-neutral programming environment. Microsoft has pretty good support for Java the language, and although the lawsuit might conceivably force Microsoft to stop calling its product "Java," it's hard to imagine the company would drop support for Visual J++. No vendor that wants to remain credible can afford a move like this. Besides, Java is a great language for writing Windows and Windows NT applications, if only for the natural way it integrates with COM. Finally, Microsoft has sold lots of copies of Visual J++, and walking away from a profitable product would be out of character. Microsoft will never fully support the Java environment. Among the issues raised in the lawsuit are Microsoft's limited support for Java's Remote Method Invocation (RMI) technology and other aspects of the Java environment that aren't strictly part of the language itself. It is plausible that the courts can force Microsoft to completely support these relatively basic features. Since the day Microsoft signed the Java contract, however, Sun and its partners have greatly expanded the technologies that make up the Java environment. In particular, they've defined the Java Platform for the Enterprise (JPE). JPE includes a number of Java-based interfaces, each of which can potentially be implemented on any operating system, and each of which has a corresponding native API in Windows NT.
For example, Enterprise JavaBeans is JPE's analog to MTS; the Java Naming and Directory Interface (JNDI) is like Microsoft's ADSI; and Java Data Base Connectivity (JDBC) is much like Microsoft's ODBC. Given that the purpose of JPE is to hide the APIs of any particular OS, what possible reason could there be for Microsoft to support these emerging Java standards? The JPE interfaces are perhaps today's major competitor to NT -- and eventually to Windows 2000 -- and it's seldom in a vendor's interest to support competing technologies. The Java hopeful might argue that by signing the original Java contract with Sun, Microsoft committed itself to supporting JPE. But it's just not plausible to believe that Microsoft would sign a contract that commits it to supporting whatever Sun defines Java to be, now or in the future. Even if a court someday decides that Microsoft is obligated to support all of JPE -- a far-fetched outcome -- exactly how does one force a company to provide good support? In the unlikely event of this occurring, it's a certainty that Microsoft would support its own proprietary interfaces first and best, relegating JPE to second-class status. Regardless of the lawsuit's ultimate outcome, here's some advice for organizations interested in using Java to build applications on Windows NT. First, if you're happy building apps that run only on Microsoft systems, but you'd like to use the Java programming language to build those applications, go ahead and use Visual J++. It's a good product, and it gets better and better as a tool for working in the Microsoft world. Having said this, though, I'd still argue that Visual Basic is a safer choice for NT enterprise development. VB is Microsoft's flagship language product, and so new innovations always seem to get the best support earliest in VB.
If you want to build scalable, enterprise-class Java applications that are not Microsoft-centric -- apps that aspire to run unchanged on any operating system -- expect to use non-Microsoft products. In particular, you'll want a Java compiler that's not as Windows-focused as Visual J++, and you'll also likely use a Java application server. Most Java app servers don't yet support a large fraction of the JPE interfaces, but they will. More important, virtually all of them target NT as a primary platform, which means they're tuned and tested to work well in the Microsoft environment. Like many multivendor technologies, the JPE interfaces aren't as completely standard as users might like. Choose a Java application server carefully. Despite the vendor's claims of portability, it probably won't be trivial to move your application to a competing product. Still, for building reasonably portable, scalable, enterprise-worthy Java applications, a Java app server is probably the way to go. Just don't expect Microsoft to provide one. Whatever results from their legal battle with Sun, a whole-hearted Microsoft embrace of the Java Platform for the Enterprise isn't in the cards. Instead, look for Microsoft to provide good support for building Windows and Windows NT applications in Java, and as little support as possible for running those apps on other systems. --David Chappell is principal of Chappell & Associates (Minneapolis), an education and consulting firm. Contact him at firstname.lastname@example.org.
OPCFW_CODE
I have nothing but love in my heart :) [Jan. 19th, 2006|12:01 pm] I must clarify! Something I said about hating socialists appears to have been misinterpreted. I live in a socialist state, and I can't say I hate it. I lived in India all my life until last year. So how do you know how you'd feel in a socialist state? OK, I don't know why you want to pretend that India is not socialist; I don't feel the need to prove statements like "fire is hot" or "India is socialist", and besides, I'm not in the mood for an infinitely deep thread. Seems we have quite different definitions of socialist, but that's to be expected. While India certainly has some socialist traits (but then, so does the US), it hardly qualifies as a majorly socialist state - although it most certainly did right after its independence. I see. I thought you were just being deliberately argumentative. Well, yeah, my definition of socialism definitely includes India. I should also point out that India calls itself socialist - the word is plastered all over the constitution. At one point the government took over all the banks. It owns most of the heavy industry and manufacturing industries. There are two states that regularly have Marxist parties in power. Until the 1990s we had what was called the "license permit raj" - if you owned a business you had to get the government's permission to take a leak. People over 40 have horror stories of how they had to wait for months to get a permit to ... wait for it ... purchase a refrigerator for their home. Laborers' unions had great power. There's been a lot of change in the last decade, but the country is still fundamentally socialist. Cabs ("autos") have their fares controlled by the government. (At least that's what the government tries to do; the reality is a huge mess.) Free enterprise is stymied at every turn. I consider America to have some socialist traits too (I am against Social Security), but that is nothing, nothing compared to India.
I'd argue those are totalitarian traits, or possibly classical political communism. I'm aware India's constitution says "socialist", but that appears to be quite a misnomer. Heavy bureaucracy is not a necessary trait of a socialist state (although it does appear rather common), and the immense level of bureaucracy you mention here strikes me as communistic (in the practical political sense, not ideologically). In sum, I would argue the traits you are listing appear to me communistic, not socialistic. And I admit that living in a communist state holds very little appeal to me. But the basics of a proper socialist state - an unemployment safety net, good education for all citizens, ubiquitous healthcare and, best of all, no poverty - are things I find it hard to live without. Of course, "proper" in this case includes "sufficiently industrialized", which is strictly unrelated to socialism per se, although I hold it as an inevitable result of socialist - as opposed to communist - policy. There are two states that regularly have Marxist parties in power. Three - if you include Tripura (if you consider it a state, that is!) - jus' for the info! I hate only the people in power, because they force their views on me. Well, if what you have in your heart is love, you wouldn't even hate those people in power. :-) The most hilarious thing I've heard this year! Probably you're hearing this word for the fifth time. I have spent most of my time in Kerala, a place with an unreasonably strong communist bias. Just to give an example, it is close to impossible to move thing Foo from place Bar1 to Bar2 without filling the wallets of the local commie "coolie gang". I am _forced to_ call a coolie when I buy a TV or a fridge. Hell, to think that all this terrorism is state-sponsored. Come to Kerala, invest your money, start an industry and try to keep it running for a year. You will know. The commie bastards will chase you out, of course after stripping you of your dignity and your last pair of underwear.
Communism as an ideal may be reasonable, but in practice, it mostly sucks. PS: Arvind, good to see you @ LJ.
OPCFW_CODE
Did Menachem Begin say "I am a former terrorist"? There is a lot of information about Menachem Begin on his Wikipedia page, but when I switch the same page to Persian, there is a line that is not mentioned in the English version. Here is its English translation: This group planted a bomb in the King David Hotel in 1947, which killed 91 people. He began one of his speeches in New York with the words, "I am a former terrorist!" It cited the Persian translation of the book Interview with History by Oriana Fallaci. But according to Wikipedia, there is no mention of Menachem Begin in the list of Oriana Fallaci's interviews. I didn't even find anything in the archived book by searching for "terrorist". Did Menachem Begin say "I am a former terrorist"? It is a matter of historical fact that Begin was a terrorist in the period before the UK relinquished its mandate over Palestine. Whether he claimed this in public statements is interesting, though possibly less significant. Many leaders - some with positive historical legacies - have been terrorists and admitted it. My favourite example is that the first Deputy First Minister in Northern Ireland after the Good Friday Agreement admitted he used to be an IRA commander. @matt_black if you mean Martin McGuinness, it's a matter of historical record that he was an IRA commander, and was jailed for a terrorism offence. By 'favourite' do you mean favourite terrorist elected to a parliament, or your favourite example of such? @WeatherVane it's my favourite example of a former terrorist whose political career ended with a generally positive legacy. @matt_black Jean Moulin or Nelson Mandela had a similar fate. @Evargalo Nelson Mandela was not a terrorist. The US and UK put any person or group that threatens their colonialism (instead of those who threaten humanity) on their list of terrorists. For historians, "terrorism" is a mode of action, not a moral judgement.
It is a fact that Mandela acted as a terrorist in the 60s and the 80s (https://en.wikipedia.org/wiki/UMkhonto_we_Sizwe#Domestic_campaign), independently of whichever list is published by any country, and independently of the fact that the regime he was combatting was racist and criminal. After @Laurel's answer, I googled for "Oriana Fallaci interview with Ariel Sharon" and found out that there is a new edition of Oriana Fallaci's book featuring Ariel Sharon and some other world leaders. The following text is the part of the interview that the question is about: Oriana Fallaci: The fact is that you are using that word "terrorist" as an insult, and rightly so. But what were you when you were fighting the Arabs and the English to found Israel? Irgun, the Stern Group, Haganah—weren’t they all terrorist organizations? When Begin killed seventy-nine people in the bombing of the King David Hotel in Jerusalem, wasn’t that a terrorist act? He admits as much. Some time ago in New York, during a lunch in his honor, he began his speech with the phrase: "I am an ex-terrorist." Ariel Sharon: Mr. Begin’s organization did not attack civilians. And Mr. Begin was honorable in telling his men not to hit civilians. The bomb at the King David Hotel was directed at the English military, and the guilt for that episode falls squarely on the shoulders of the English High Commissioner, who had been warned a half hour beforehand but who escaped, rather than evacuating the hotel. We were not terrorists; we were freedom fighters. We were fighting the English occupation. References: L’intervista di Oriana Fallaci ad Ariel Sharon a Tel Aviv, del settembre 1982 (in Italian) Interviews With History and Power by Oriana Fallaci (2016) This gives a lot of helpful details. From the wording I would assume this happened in 1977–1981, and the fact that it's "a lunch in his honor" cuts out a bunch of options (like any interviews or public speeches).
However, I'm still not sure what visit this was, in order to try and find some context (or the text of the speech). From the quotes it seems that Fallaci claims that Begin said this, and Sharon then counters that Begin would never say this, as he didn't see himself or his organization as terrorists. This is also how he talked about himself and the Jewish resistance. For example, from a 1981 speech: "I came from the fighting resistance. I fear no one." (https://www.youtube.com/watch?v=agAPkTdZbas) @Laurel I'm curious how you guessed the date of writing from the wording? @C.F.G He became PM in 1977, and if you're saying something happened "some time ago" I'd assume you mean at least a year ago. But any of the details she remembers could be wrong, and so could the date range I guessed. Partial answer: I don't know, and neither do the Wikipedia editors. Some additional context is given by Dankula on the Wikipedia talk page (Google Translated with minor copyedits and added notes): Apparently, there was a typo. Mr. Begin did not tell Oriana Fallaci "I am a former terrorist" either in jest or as a metaphor. Instead, he started a non-humorous speech in America in front of hundreds of people (including Reagan) with this sentence: "I am a former terrorist". Unfortunately, I do not have access to the original speech (currently), so I trusted Ms. Fallaci and quoted her interview with [former Israel PM Ariel] Sharon. In this interview, both people (Ms. Fallaci and Mr. Sharon) agree that this sentence was said, and even Mr. Sharon, like my good friend [fellow Wikipedia commenter] Sinbad, did not try to make it look like a metaphor or a joke. Rather, he justifies it on the grounds that the target of the King David Hotel bomb [1946] and other Haganah, Irgun, and Stern terrorist bombings were British soldiers, and he remains silent on Fallaci's question about the civilians killed. Therefore, even Mr. Sharon does not question the principle of the case and does not interpret it as a joke or a metaphor.
The original speech was delivered in the presence of President Reagan [elected 1980] and during Begin's imposed trip to America shortly before the bombing and attack on Beirut [in 1983?], which probably Sinbad Garami can achieve better than us. I really recommend reading Mr. Sharon's 1982 interview.

Sharon's 1982 interview is quoted in part on The Free Library and FPIF. The exact citation is "The Washington Post, August 29, 1982, front page, pp. 18, 19". Unfortunately, here I hit a wall, as I cannot access (or even find) that article. This probably couldn't have included all the information either, since it was a 10-hour interview, so maybe it's published elsewhere too (and in a language that's more decipherable than Persian).

As for the speech itself, Begin visited the US about a dozen times while PM (and before the interview). Begin and Reagan met on several occasions, such as September 9, 1981 (and again a day later) and June 21, 1982, but I don't think these were in New York, nor is there anything else that even looks close to the line in question. Begin did give at least one speech in NY in 1978 (May 4th?), though I can't find the full text of that, and it's not clear if not-yet-President Reagan was there. He did meet with Carter later in 1978 in NYC, but again I have no real details. Thus the second wall that I've hit.

I don't even understand the quote above: "I am an ex-terrorist" he did NOT say, he said "I am a former terrorist". What is the difference?

@EvanCarroll It looks like Google Translate decided to translate the same text two different ways. It makes more sense after fixing that. Still a remarkably awkward translation. Apparently, there was a typo. Mr. Begin did not tell Oriana Fallaci "I am a former terrorist" either in jest or metaphor. Instead, he started a non-humorous speech in America in front of hundreds of people (including Reagan) with this sentence. "I am a former terrorist".
vs "Mr Begin said "I am a former terrorist" and it was neither in jest, nor a metaphor but he didn't say this to Oriana Fallaci: it was said in front of hundreds of people (including Reagan) in America". The negation of having said X to Y when having said X to Z is awkward when done discretely. But I guess it's a Google thing.
Version: 4.6.2 (using KDE 4.6.2)

Trying to access https://tim.rz.rwth-aachen.de/mail-lifecycle/ results in the following error (translated from German): "The action cannot be carried out. Connection attempt rejected by the server. Request details: Date and time: Monday, 13 June 2011 09:34. Additional information: tim.rz.rwth-aachen.de: SSL negotiation failed."

Steps to Reproduce: Access https://tim.rz.rwth-aachen.de/mail-lifecycle/ using Konqueror or reKonq. Actual Results: The pasted error message appears. Expected Results: The web page opens, as it does in Firefox.

This server has a number of configuration errors, and bugs. The reason KIO is having trouble connecting appears to be because it will not accept connections that support TLS1 even when they use the SSL3 compatible handshake:

openssl s_client -connect tim.rz.rwth-aachen.de:443 -ssl3 FAILS
openssl s_client -connect tim.rz.rwth-aachen.de:443 -tls1 FAILS
openssl s_client -connect tim.rz.rwth-aachen.de:443 -no_tls1 WORKS

It also has SSL2 enabled, which is bad. It also has NULL ciphers enabled, which is very bad. Basically KDE should be able to connect to this by using more workarounds, but the underlying problem is a buggy server. E.g. if you look in Google Chrome on Windows you'll see this message: "The connection had to be retried using SSL 3.0. This typically means that the server is using very old software and may have other security issues. The server does not support the TLS renegotiation extension." Suggestion to Qt to enable such a fallback flag in Qt's SSL code: http://bugreports.qt.nokia.com/browse/QTBUG-19860

(In reply to comment #1) > This server has a number of configuration errors, and bugs.
> The reason KIO is having trouble connecting appears to be because it will not accept connections that support TLS1 even when they use the SSL3 compatible handshake:
> openssl s_client -connect tim.rz.rwth-aachen.de:443 -ssl3 FAILS
> openssl s_client -connect tim.rz.rwth-aachen.de:443 -tls1 FAILS
> openssl s_client -connect tim.rz.rwth-aachen.de:443 -no_tls1 WORKS

This does not seem to work either. At least it does not work on my system. I get the following when I try it:

$ openssl s_client -connect tim.rz.rwth-aachen.de:443 -no_tls1
139725117707944:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177:
no peer certificate available
No client certificate CA names sent
SSL handshake has read 0 bytes and written 141 bytes
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported

The only thing that works with this site is SSLv2: openssl s_client -connect tim.rz.rwth-aachen.de:443 -ssl2

What I find curious is that both Chromium and Firefox seem to connect to the site using an SSLv3 handshake, at least that is what they report. Therefore I do not understand why that won't work with either QtNetwork or openssl's command line client.

Indeed, using openssl 1.0.0c I see the same as you: I can only connect using SSL2. Examining the trace in Wireshark, the server FINs the connection immediately after getting the CLIENT HELLO message. Examining the HELLO I see I'm sending 45 cipher suites and 2 compression methods. Unfortunately s_client doesn't let us disable compression; however, I built openssl 1.0.0d with no-zlib and forced ssl3 and was then able to connect. The reason it likely worked in my earlier attempt (comment #1) was because I was using an older openssl that had no support for compression. So, the problem is still a buggy server; however, we can probably work around it using the SSL_OP_NO_COMPRESSION flag. I've created QTBUG-21906 to track the requirement for access to the compression setting.
I've implemented most of what's needed to resolve this for Qt5, but it still needs autotests etc. before I can make the MR.

Just an update that s_client does in fact have a no-compression option, it just isn't listed in the usage. It's -no_comp, as I discovered when I looked at adding this to openssl.

(In reply to comment #7) > Just an update that s_client does in fact have a no compression option, it just > isn't listed in the usage. It's -no_comp as I discovered when I looked at > adding this to openssl.

Indeed. The -ssl3 or even -no_tls1 works fine when used with -no_comp; so the problem with that site is not only its missing support for TLS1 but also the fact that it does not support SSL compression. BTW, your patch seems to be for Qt5, but this is actually a bug in Qt's networking code. As such, should it not be addressed in Qt 4.8 at the least? I personally think this would even qualify to be fixed in Qt 4.7.x, simply because no one knows how many such sites exist out in the wild.

> BTW, your patch seems to be for Qt5, but this is actually a bug in Qt's > networking code. As such should it not be addressed in Qt 4.8 at the least?

Yes, it's been accepted for Qt5 and I've also asked for it to be backported. Peter seems basically in agreement that it should be.

Git commit d2754fa03025be9324e4d652428eee2c4ca2d4fb by Dawit Alemayehu. Committed on 19/05/2012 at 08:07. Pushed by adawit into branch 'KDE/4.8'.

- Fixed SSL negotiation failure when connecting to secure sites that do not support SSL compression, e.g. https://tim.rz.rwth-aachen.de/mail-lifecycle/.
- Use KTcpSocket::SecureProtocols instead of KTcpSocket::TlsV1 as the default SSL protocol. This fixes very slow connections to certain sites, e.g. the "Search for Jobs" button @ http://www.suse.com/company/careers/.
- Improve the speed of SSL negotiation by caching and sharing the previous settings amongst ioslaves when those settings are not the default ones.
That way any ioslave that connects to the same host afterwards does not have to perform the same expensive SSL negotiation process all over again.
M +122 -49 kio/kio/tcpslavebase.cpp
M +0 -3 kio/kio/tcpslavebase.h
Please note that the fix for this bug requires Qt 4.8 or higher.
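For readers outside Qt, the compression workaround the thread converges on can be sketched with Python's ssl module; the function name and defaults below are illustrative, not KDE code. OP_NO_COMPRESSION is Python's name for the same OpenSSL option the SSL_OP_NO_COMPRESSION flag refers to.

```python
import ssl

# Build a TLS client context that never advertises compression methods in the
# CLIENT HELLO -- servers like the one in this report drop the connection as
# soon as they see compression offered.
def make_no_compression_context() -> ssl.SSLContext:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.options |= ssl.OP_NO_COMPRESSION  # strip compression from the handshake
    return ctx

ctx = make_no_compression_context()
assert ctx.options & ssl.OP_NO_COMPRESSION
```

No connection is attempted here; the point is only where the option is set, which mirrors what the Qt patch exposes to KIO.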
Firebase 5.5.0's dependency on GoogleUtilities/* conflicts with GoogleTagManager 7.0.0's [READ] Step 1: Are you in the right place? I think so? Judging by this previous issue for a similar problem: https://github.com/firebase/firebase-ios-sdk/issues/1384 [REQUIRED] Step 2: Describe your environment Xcode version: 9.4.1 Firebase SDK version: 5.5.0 Firebase Component: Analytics Component version: 5.1.0 [REQUIRED] Step 3: Describe the problem FirebaseAnalytics declares a dependency on GoogleUtilities subspecs at version 5.2 but GoogleTagManager declares a dependency on the (old) umbrella spec at version 1.3. Steps to reproduce: With this Podfile: platform :ios, '10.0' target 'FirebaseDependencies' do use_frameworks! pod 'Firebase', '5.5.0' pod 'GoogleTagManager', '7.0.0' end pod install Analyzing dependencies [!] CocoaPods could not find compatible versions for pod "GoogleUtilities": In Podfile: GoogleTagManager (= 7.0.0) was resolved to 7.0.0, which depends on GoogleUtilities (~> 1.3) Specs satisfying the GoogleUtilities (~> 1.3) dependency were found, but they required a higher minimum deployment target. Thanks for the report. We plan to address this in the next release. In the meantime, a workaround is using the previous release of Firebase - pod 'Firebase', '5.4.1' Internally tracked at b/112272935 Fixed with GoogleTagManager 7.1.0 We have released GoogleTagManager 7.1.0 fixing the dependency - the issue should be resolved now. Hi. Thanks for the fast turnaround on this. 😁 Unfortunately, however, I'm seeing some crashes on launch which I think I have narrowed down to occurring when there are stored events from a previous session that have not been sent yet. (Though I'm not fully certain of that.) I've reproduced it a couple of times by performing a few actions while in aeroplane mode, killing the app, leaving aeroplane mode and then launching a new app session. 
The exception I'm seeing is in -[TAGHitStore addPendingEvent:]: -[APMValue encodeWithCoder:]: unrecognized selector sent to instance … Setting a breakpoint on -addPendingEvent: seems to indicate that the APMValue object is appearing as the value for the _si parameter of a screen_view event: (lldb) po [$arg3 _ivarDescription] <TAGPendingEvent: 0x28149e880>: in TAGPendingEvent: _allowPassthrough (BOOL): NO _name (NSString*): @"_vs" _origin (NSString*): @"auto+gtm" _parameters (NSDictionary*): <__NSDictionaryM: 0x281a95820> _timestamp (NSDate*): <__NSDate: 0x2818d4810> in NSObject: isa (Class): TAGPendingEvent (isa, 0x1a10545005d) (lldb) po [$arg3 valueForKey:@"_parameters"] { "_o" = auto; "_sc" = SGLaunchScreenViewController; "_si" = "-7837184538219958768"; } (lldb) po [[[$arg3 valueForKey:@"_parameters"] objectForKey:@"_si"] class] APMValue … 2018-08-09 20:39:02.331511+1000 Westfield[2269:415757] -[APMValue encodeWithCoder:]: unrecognized selector sent to instance 0x281a957e0 (lldb) po 0x281a957e0 -7837184538219958768 @cysp Thanks for the detailed report. We understand the issue and are investigating a solution. We have just released GoogleTagManager 7.1.1. @cysp - can you please check if updating to it fixes the crash? Thanks! I've been able to hit the previously-crashing codepath under the debugger and can confirm that it doesn't crash any longer. 😀 I'm still seeing the same exact crash , with 7.1.1 google tag manager , any suggestions? -[APMValue encodeWithCoder:]: unrecognized selector sent to instance 0x28257e780 default 14:29:50.260407 -0400 *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[APMValue encodeWithCoder:]: unrecognized selector sent to instance 0x28257e780' @blolo Tracking internally at b/116807869 @blolo Which version of FirebaseAnalytics are you using? @htcgh we are using FirebaseAnalytics (5.1.0). @blolo Can you please verify if the crash still exists with FirebaseAnalytics 5.2.0? 
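The crash analysis above boils down to a serialization-protocol mismatch: an APMValue that does not implement NSCoding ends up where an encodable string belongs. An illustrative sketch (in Python, not the GoogleTagManager implementation; the function name is ours) of the kind of defensive step that prevents this class of crash:

```python
import pickle

# Before a pending event is archived, coerce every parameter value to a plain
# serializable type, so a foreign value object (like the APMValue standing in
# for "_si" above) can no longer blow up the encoder.
def sanitize_parameters(params: dict) -> dict:
    safe = {}
    for key, value in params.items():
        if isinstance(value, (str, int, float, bool, type(None))):
            safe[key] = value
        else:
            # Fall back to a string representation instead of trusting the
            # object to know how to encode itself.
            safe[key] = str(value)
    return safe

event = {"_o": "auto", "_sc": "SGLaunchScreenViewController", "_si": object()}
blob = pickle.dumps(sanitize_parameters(event))  # archives cleanly
```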
Hi, I'm also facing this issue with pod 'GoogleTagManager', '7.1.0' [APMValue encodeWithCoder:]: unrecognized selector sent to instance 0x28257e780 And I cannot update this dependency because my project is a Cordova one and if I update the dependency the project doesn't compile. Any workaround? I have tried several things, but the problem remains. Thanks in advance! New issue opened at #3053. Please continue discussion there.
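The resolver error at the top of this thread follows directly from CocoaPods' "pessimistic" version operator `~>` (same semantics as RubyGems): `~> a.b` allows >= a.b and < (a+1).0, while `~> a.b.c` allows >= a.b.c and < a.(b+1). The helper below is illustrative, not the CocoaPods resolver, but it shows why `GoogleUtilities (~> 1.3)` from GoogleTagManager 7.0.0 and the `~> 5.2` requirement from FirebaseAnalytics can never meet in one version.

```python
def satisfies(version: str, constraint: str) -> bool:
    """Does `version` satisfy the pessimistic constraint `~> constraint`?"""
    v = [int(x) for x in version.split(".")]
    c = [int(x) for x in constraint.split(".")]
    upper = c[:-1] or [c[0]]              # drop the last component...
    upper = upper[:-1] + [upper[-1] + 1]  # ...and bump the new last one
    def pad(xs, n):
        return xs + [0] * (n - len(xs))
    n = max(len(v), len(c), len(upper))
    return pad(c, n) <= pad(v, n) < pad(upper, n)

assert satisfies("1.3.2", "1.3") and satisfies("5.2.0", "5.2")
# No single GoogleUtilities version can satisfy both pods' requirements:
assert not any(satisfies(v, "1.3") and satisfies(v, "5.2")
               for v in ["1.3.2", "1.9.9", "5.2.0", "5.2.3"])
```

This is why the fix had to come from GoogleTagManager 7.1.0 loosening its own dependency rather than from any Podfile change.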
Have you read the thread? Unlike you or taylor, we are not comfortable with the idea of using a proprietary archive format. Yes, I did. Having spent a few years watching taylor seem to do as much coding as the rest of the team combined I'm willing to believe that his design would have been well-designed and well-thought-out with regards to its application to the FS2Open engine. I see the beginnings of an idea here, but I don't see any kind of in-depth plan. I don't see anybody addressing his comment that the FS2Open CRCs are different than normal CRCs and you'd have to break the networking code to fix it, or addressing what level of compression is adequate, or whether third-party utilities would only compress the files that we want compressed. In fact, I see very little discussion or understanding of how this will work in the FS2Open engine. Will decompression be streamed for certain filetypes and not others? Which filetypes? Will the new pack files be read with the same precedence as old files? Will loading a 7z file be internally compatible with the structures that FS2Open uses to store package filesystem data? Maybe those aren't issues now. Maybe you've rewritten the network code and the packfile code so it bears no resemblance to the old stuff and the time taylor spent researching this stuff is completely irrelevant now. I don't pretend to know. A lot of these are issues that using a custom archive based on VPs can mitigate, but a standard archive format does raise. Sure, it's easier for users, and that's great (except where it's been pointed out that people might take that as license to edit them like archives). I've spent probably a half-hour now reading this thread, digging up that post, and now tonight, explaining exactly how I can see it being useful. This didn't need to be a confrontation, I was trying to be helpful because I figured most people had probably not known or forgotten about that thread since it was so long ago and nobody had brought it up. 
I didn't even state that I disagreed, and you're already arguing with me. Yeah I'd imagine that was probably one of the reasons it was never implemented - no one had the time to sit down and write/maintain the code for the new format. Of course, no one's sitting down to get LZMA added in yet either, but maybe we can work on that shortly after 3.6.14. This is the second needless assumption that pisses me off. Not only did I end up restructuring Maja in order to support compressed files, but I actually have a CVP handler in there with an 'import' function. An 'export' function is no big deal. I was considering doing that to ease the transition if that was what this discussion came to, but now I'm afraid to admit even thinking about volunteering for fear that you'll take it as further evidence that I think you're wrong and I'm trying to subvert your plans and fight you. Trust me, if I seriously thought you should implement a proprietary format instead of 7-zip and I wanted to fight you on it, you would be looking at pages of writing explaining exactly how I thought your implementation was flawed, exactly why you shouldn't do it, how I could do better, and the plans I was already drawing up for my implementation. As it is, all I've done is repost (and now point out) a set of problems that somebody else spent time finding out, that now you (or whoever goes to do the implementation) doesn't have to spend time relearning or finding out when you make the change and the network code mysteriously quits working or something. And where on earth did anybody get the impression that taylor endorses this or I even think taylor endorses this? The post was four years ago. A lot has changed since then in the computing world. 
I have no clue what taylor thinks and I am totally uncomfortable getting him involved in this now, for fear that somebody will think he has any opinion on this. TL;DR: Don't assume that people are fighting you, and don't accuse them of doing no work on the subject when they're spending free time trying to help you.
This web application is for sending information to customers who have mobile phone, email or messenger clients. Functions: address book with groups, simple rich-text editor, templates, message queuing, send worker process (Azure Functions), gateway API usage. This application is nothing fancy or new, so there should be existing projects to reuse / adapt ...system and calendar for independent drivers to show availability. Requires payment to PayPal for the membership fee. Requires an about us page, contact us. Chat room and direct instant messaging service between members. Requires an independent customer review about the drivers in order to improve standards of service. Requires a members' sign-in area showing I have a quick job for someone looking for a 5* rating and some instant money. I have a website with a bootstrap theme. I want the "Calibri Regular" font for the site. And 3 pages need to be designed, which should take about 90 minutes. And some menu bar and login page settings. If you have relevant skills then you can do it. Max. budget: 2 hours. Note: I need We are selling wireless audio products via Shopify, and we would like to hire a freelancer with expertise in Facebook and Google marketing, including ads, Messenger and Instagram ads, to drive traffic and generate sales. We will only be interested in working with a freelancer who can give a detailed plan and targets so we can track each step of progress ...development company in Hong Kong which has provided mobile app and web development services and solutions to our clients since 2011. Currently, we have a request to develop an instant messaging app as part of a construction management system for inspectors and workers to communicate. Since this project is quite urgent (it has to launch within 1.5 months) Looking for a female Virtual Assistant for video chat o...online. Must be open minded and happy to talk/do things of an adult nature. Previous experience will be a great plus. If you haven't done it before, I can guide you.
Instant start for the right candidate. Requirements: - Female (18+) - Fast internet connection - Cam. Apply fast. ...have the same mechanics and features but have a different setting and graphics 2. Graphics, animations, sounds 3. Code 4. Integration with Facebook Messenger - this includes making the project an instant game which complies with Facebook regulations, integrating Facebook Audience Network 5. Creating assets for the Facebook website, creating the icon 6. All source ...Offers Management 14. User, subscription and testimonial management. 15. Filter option by location, price and reviews 16. User can choose a hotel and check its details 17. Instant booking confirmation mail and payment invoices 18. Guest can cancel the bookings. 19. Past history / transactions. Must-haves in a hotel's web booking website: 1. Minimal steps Please refer to the web site [log in to see URL] -> need to change the existing look, like www.instaemi.com. 2) Need to set the mobile view correctly, like [log in to see URL] 3) Set the slider like in [log in to see URL] 4) Need to put an instant call back & subscribe-to-our-newsletter view like in [log in to see URL] 5) Need to put an Apply Now button below every ima... I need you to write some content for my current project. Firstly, this is regular and a kind of bulk work, so there are no possibilities of negotiation. Bid only if you agree with the price. Secondly, the submissions will be made daily at the time we decide; failing will result in termination of the contract and the content will not be accepted. Indians are preferred. Please start your bid with the ... I have an MFC exe program with source code. Need to turn it into a website/web program. Make it run on a Linux server and output the result to a web page. Need experience in VPN, networking and data communication. Can somebody help me to successfully install Ninject MVC 5 on my sample program in C#? Must be on my machine using TeamViewer.
Create a custom website based on Magento 2.26 with error-free code - Part 1 (Only Magento certified developers should apply) We have to use Magento only as a platform and the major work is custom development; all the things are already explained in the excel sheet (Read the excel sheet carefully) We have to optimize this website for Google PageSpeed and GTmetrix Google [log in to see URL] Here is the current site I had someone build and they could not finish. Please do not bid if you know you cannot handle this project. I do not want to waste my time. The site will have 5 different areas: Market Place, Classified area, Directory listing for businesses, Events, Blogs. Please bid only ...packet. 4. I need to send them with close to 0ms delay to a program [into two arrays: an array of destinations/sources and an array of lengths] (which can be written in an AutoHotkey script, Python, or, maybe, C, or something else [can be discussed]). 5. Upon receiving a packet the program should execute a script (I will write the script myself: ...attached file to find out more FEATURES: 1. Uncover hundreds of deep & hidden interests that are not available in Facebook™'s Ad Manager and Audience Insights tool 2. Shows instant performance metrics of each & every interest 3. Pre-qualifies every interest as highly relevant for the keywords a user enters 4. Adds interests directly to your ad manager ...programming page Introductory Only 14 - File Modeling Technical Specifications 14 - Mojo Accounting Program D in the application to know all the financial details 15 - Diversity in the payment method: Visa, Master Card, Mada or payment on receipt 16 - Instant conversation in the application and attaching pictures 17 - Determine the distance from the store for I am really looking for someone who knows PHPFOX and is willing to work on an adult gay site. If you make a bid, please confirm this or you will not be considered.
I am running version 4.7 of PHPFOX and I need CometChat installed and working (I have a license for it and will provide the details). I also need FFMPEG installed and working for video I would like the app for educational purposes It will have the following features 1) File sharing (images, PDFs, documents, etc.) 2) Instant messaging (messages between users and group messages as well) 3) The interface will have a dropdown list to select a group to join (physics group, math group, literature group, etc.) 4) There will be an administrator Hi, as the title says: build me a Facebook chat bot messenger that converts!! I run a business in Sydney using Shopify and am looking to get a big boost from this new trend. Give me the best price and bid. Need the job done asap
- Programming in C, C++, Java, Ruby+RoR, TCL/Tk, Go, Shell Script, and SQL.
- Java (SE and Android) and C++ application debugging, profiling and tuning.
- High expertise in network-related protocols like HTTP, SIP, TCP/UDP/IP, DNS, Radius and Diameter.
- Software design, specification, integration, development and quality assurance.
- Great experience developing and designing multi-threaded distributed systems with demands of high availability, high rates of traffic and very low processing latency.
- High expertise in Linux-based operating systems (Red Hat, CentOS and Ubuntu): system administration, troubleshooting, advanced POSIX, socket and concurrent programming, and software packaging.
- HTTP, WebSockets, SIP and DNS network packet and communication flow analysis.
- Great competence developing software following standard object-oriented design and functional patterns and code style standards.
- Database systems: PostgreSQL, SQLite and MySQL.
- Design and implementation of tools to automate integration tests, system tests and pre-production environments.
- Native Android development.
- Test automation and unit testing with frameworks like jUnit, Mockito, PowerMock and RSpec.
- Software tools: gcc, rpm, gdb, eclipse, svn, autotools, ant, maven, advanced git, Wireshark, Android development tools, and JIRA.
- OpenSSL programming.
- Docker advanced user.
- Agile methodologies.

January 2014 - Current Axway - Senior Software Engineer
- Implementation, design and development of features for Axway APIGateway and API Manager.
- Designed and developed the APIGateway integration with a cryptographic external HSM device. With this integration, APIGateway is able to forward Java and native cryptographic operations to an external security device that manages private and secret keys.
- Part of the team that developed the multi-threaded pipeline for sensitive API traffic redaction.
All the payloads processed by APIGateway are intercepted, and all the sensitive information is removed before it gets stored on the filesystem.
- Part of the team that implemented the WebSockets feature for APIGateway. The application is able to act as a proxy websocket server, intercepting, authenticating and authorizing websocket messages in both directions.
- Support and maintenance of old APIGateway versions.
- APIGateway application tuning for high-demand use scenarios.
- Part of the team that developed Swagger 2.0 Specification support for APIManager.
- Mentoring junior developers.
- Technologies: C/C++, Java, JNI, OpenSSL, REST APIs, Jython and Docker.

October 2012 - January 2014 Airtel ATN - Senior Software Engineer
- Contributed to ATN (Aeronautical Telecommunication Network) CM, CPDLC and ADS-C test tools by designing/developing several new features and resolving existing bugs.
- Requirements management and architecture specification.
- UI wireframing and specification.
- Automated unit and integration test design.
- Developed some ASN.1 parsing tools.
- Technologies: C/C++, Java, TCL/TK, Ruby.

October 2007 - October 2012 PT Inovação | Outsoft Software Engineer
- Management and definition of software requirements and issues.
- Application UML modeling and business logic design.
- Application profiling, tuning and debugging.
- Multi-threading and network-related programming for real-time systems.
- Writing user & software documentation.
- Unit testing, test automation and quality assurance planning.
- Design/development of software products according to the established company life cycle model.
- Linux/POSIX programming, administration, troubleshooting and packaging.
- Supported and guided other junior engineers in various development and learning tasks.
- Collaborated with the 24/7 support team solving some production issues.
- As a C/C++ developer I was involved in the design/development of a large multi-threaded SIP B2B server used in an IMS network that provides wise services over an existing SS7/Camel4 infrastructure.
- SIP protocol flow analysis, and designed some SIPp automated tests.
- My main roles in this project were application data modeling, multi-threaded system design, testing design, and software packaging. The system is composed of some JavaSE components with a RoR (3.X) web administration interface.
- Technologies: Java, Ruby, Ruby on Rails and Bind.
- Developed/designed an application module to balance and route SIP signaling across a farm of application servers and media servers. The application is used for scaling up some SIP applications in an IMS/VoIP network.
- Technologies: OpenSIPS, C and C++.

SMS Charging Gateway
- Developed a multi-threaded module capable of contacting an online charging system that handles all the SMS traffic of a telco in Africa.
- Designed a test tool that automatically validates a pre-production system.
- Technologies: C/C++, Linux.
- Led the development of some components used by several successful products in the company for managing database connections, logging, real-time event processing/persistence, real-time analytics (calculation & aggregation) and centralized system configuration version management.
- Technologies: Java, C/C++, Ruby and Perl.

January 2006 - June 2007 Rederia - Software Engineer, Aveiro, Portugal
- Collaborated with the Aveiro Domus Domotics and Communications Team designing the necessary functional and technical specifications to allow the construction of the House of the Future.
- WiFi and Ethernet network design and planning.
- Developed some customized VoIP PBX systems based on the open source framework Asterisk.
- Asynchronous Android Programming - book author for Packt Publishing.
- Asynchronous Android - book technical reviewer for Packt Publishing.
- Designed and developed the native Android applications SMS Scheduler Lite and SMS Scheduler Pro (available on Google Play).
- Designed and developed the native Android application Acordo Ortográfico (available on Google Play).
- Designed and developed for a third-party client the native Android application Keytroller SMS, with Dropbox API and sensor metering integrations.
- Monocline Records - co-founded and co-managed the label between 2008 and 2011.
- Re:Axis - co-founded and collaborated in the electronic music project between 2006 and 2010.
- Developed an open-source Ruby API for parsing the Google Play Store (GitHub).
- Created a tool to resize Android resource images automatically on the command line (GitHub).
- Developed a boilerplate library (Java) for Android development (GitHub).

Electronic & Telecommunications Engineering, September 2000 - December 2006, Aveiro University, Aveiro, Portugal
- Electronic radio frequency systems development
- Operating systems design & architecture
- Network security in communications systems
- Digital signal processing

- SL-275-SE6 Java Programming Language - Behavior, Aveiro 2007
- FJ-310-EE6 Developing Applications for the Java EE Platform - Rumos, Aveiro 2008
- Beginning Developing Web Apps with Ruby on Rails - Galileu, Aveiro 2009
- Machine Learning - Coursera (Stanford University) - June-July 2013
- Music Production - New Technologies
Attention Gentoo/C++ developers who use the boost library! As of =dev-libs/boost-1.50.0-r2 there are no more eselect profiles! So, there is no way to switch between multiple versions of boost! And no easy (lazy :)) way to detect it from your configure scripts (only the hard way :)). Tiziano Müller, the author of the =dev-libs/boost-1.50-r2 ebuild, which recently (silently) appeared in the portage tree, kindly gave me some explanations about future directions regarding boost and eselect: Yes, that change is intentional. I know that this makes life for people using Gentoo as a development platform harder, but unfortunately this is how we have to proceed. See my announcement on the gentoo-dev ml here: http://marc.info/?l=gentoo-dev&m=134580187015362&w=1 and an earlier discussion here: http://marc.info/?l=gentoo-dev&m=132704075103126&w=1 I'll do an official announcement on gentoo-dev-announce (and maybe a news item) at least when it hits stable (possibly already if it gets unmasked). So we (C++ developers) have to do something about this, because life is getting harder %) To make life a little simpler, the ebuild creates a bunch of short (unversioned) symlinks in the directory, so sometimes it would be enough to add the -L/usr/lib/boost-1_50 option for the linker and -I/usr/include/boost-1_50 for the compiler. Fortunately cmake (my primary build system) has good enough boost detection support, but some packages on my system got broken :( and I have no time to fix 'em. In particular, schroot broke and is unable to detect boost anymore… and to fix it, configure.ac needs sane boost detection (yep, nowadays it is simple and naive… you can't even specify a custom location for it). The alternative is to hack the ebuild and provide the xxFLAGS environment before configure.
This would be easier than rewriting the boost detector in the schroot's configure.ac. Update: I've hacked the schroot ebuild and added it to the bug report… (here is a copy in my overlay). =net-libs/telepathy-log-qt-0.10.2 also fails to build, because it implicitly depends on boost via qt-gstreamer, which uses some header-only libraries. A bug related to qt-gstreamer is Update: My report about schroot was included as a blocker in another (bigger) bug listing packages that became broken with boost-1.50.0-r2. A separate bug about telepathy-log-qt is here. Update 06-Sep-2012: Another victim found.
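The "hard way" detection the post alludes to could be sketched as follows: with the eselect-managed symlinks gone, a build script has to probe the slotted directories itself and derive the -I/-L flags. The layout (/usr/include/boost-1_50, /usr/lib/boost-1_50) follows the ebuild described above; the function itself is illustrative, not part of any real build system.

```python
import glob
import os
import re

def find_boost(prefix="/usr"):
    """Probe Gentoo-style slotted boost dirs and return compiler/linker flags."""
    candidates = []
    for path in glob.glob(os.path.join(prefix, "include", "boost-*")):
        m = re.search(r"boost-(\d+)_(\d+)$", path)
        if m:
            candidates.append((int(m.group(1)), int(m.group(2)), path))
    if not candidates:
        return None  # no slotted boost installed under this prefix
    major, minor, incdir = max(candidates)  # prefer the newest slot
    libdir = os.path.join(prefix, "lib", "boost-%d_%d" % (major, minor))
    return {"cppflags": "-I" + incdir, "ldflags": "-L" + libdir}
```

A configure script would then export the returned flags as CPPFLAGS/LDFLAGS, which is essentially what the hacked ebuild above does by hand.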
OPCFW_CODE
Discussion in 'Article Discussion' started by Sifter3000, 26 Jul 2010.

Would ARM really give Intel a run for its money on larger, faster chips? I'd love to know - goes back and switches on the A4000 and hugs self

Don't worry Intel, this move is completely 'ARM-less.

ARM won't ever challenge Intel in the market for desktops or even mid-range laptops; its chip architecture isn't designed for that. What's more likely are ARM-based netbooks/smartbooks/tablets, and Microsoft's covering its bases in case this takes off and an ARM build of Windows 7 is needed.

How often will you be upgrading your PC? How often will you be changing your smartphone? Most (and I don't mean us!) people will change their phone 2 maybe 3 times before any major PC change. If ARM is becoming the norm for smartphones then this could be a very clever move by Microsoft. Most enthusiasts do change their PC or its components a lot more often, but we are the minority.

Given the number of pies Intel sticks its fingers in, I don't really see them worrying too much.

Maybe MS has a lighter version of Windows 7 with a touch-friendly interface in the pipeline? Something capable of web browsing, content consumption and maybe basic office stuff, with a fraction of the overhead of full-blown Windows? That would fit quite nicely with a MS/ARM collaboration that produces iPad-whipping tablet and Atom-smashing (sorry) netbook CPUs.

Microsoft, if you think that using ARM for your next Xbox will cut down on the noise and RRoD events...

Good Lord, you still have an Archimedes? Damn, I remember seeing Tetris at my cousin's house on his Archimedes... and the Flag demo, blowing in the wind... a sweet machine.

I rescued a two-slice RiscPC (with 52x CD-ROM, 40GB hard drive, and 250MB ZIP drive, no less) from a skip once. Still works, too.

Also, I think Intel drew blood first on this love-in by showing the world Android running on an Atom-based device in April this year.
Maybe it is for some media player/hub device (Apple TV kind of thing). I am considering the XB360 for its media capability (movies, Sky Player, maybe Zune music in the future) and for that the full-fat 360 is overkill. If MS can make it small and quiet then it would be a no-brainer for me.

Another nail in Atom's coffin. You either want lots of power, or enough power with low power consumption. Atom is in the middle and could easily be killed off by an ARM version of Windows.

Yep... though it's not plugged in right now - anyone got a copy of Zarch about?

@Gareth Halfacree... NICE!!!

I love how everyone's jumping on this and going "ARM WINDOWS 7 OMG!!!", and completely forgetting Windows Phone, Windows Embedded, Zune, and the other Microsoft hardware arms that run on ARM. Perhaps this is a sign that MS is going to make their own reference phone for Windows Phone, or maybe they're thinking beyond the Zune HD. Or, considering that the Xbox 360 runs on a triple-core PowerPC processor (RISC) already, perhaps they're working with ARM for something for their Xbox 720. Maybe Kinect is running with an ARM processor to do the limb detection and the like.

Pretty sure Windows 7 ARM is not on the cards. No more than a resurrection of Windows NT for MIPS, Alpha or PowerPC is on the cards. I wonder if the X360 kernel came from the remains of NT 4.0 for PowerPC? Food for thought...

Archimedes, now that is a blast from the past indeed! Those beasts were stonkingly fast and also used to come with a PC emulator as I recall. It was also the first machine that got me into wanting to run video on my lowly Amstrad PC1512 with its multitude of 4 colours from a palette of 16. My how things have come a long way since those bygone days... I think having a new CPU ARM's race can only be a good thing and whatever they bring to the table in future products will hopefully be good for the consumer too.

I vote for a new version of Windows CE, but who knows. Just another way to stay competitive...
levick might have a point about Atom though.

Clever? Windows Phone 7 hasn't even been released yet. What a f**king joke. If I were a MS shareholder I'd be well pissed and calling for Ballmer to be given the "mother-of-all" enemas. Under his stewardship we've witnessed the stillbirth of Vista, IE becoming an irrelevance, and now WP7 looks like it'll arrive at the orgy after everyone else is already shagged-out. Come on ARM! Now I feel depressed.

It's just M$/Ballmer trying to squash the open source powered netbooks and smartphones... Throw enough money at a product and people will be forced to use it...

I don't think you know the meaning of the word "clever", and if you do it's a case of knowing but not "taking it on board". I enjoyed Vista and had no problems; I also chose new hardware to go with Vista... old PCs and Vista didn't mix, and it was intended for newer multithreaded devices with plenty of RAM and grunt... it did as intended, and it did it well. Win7 also has all the same problems Vista had, albeit a little more responsive. I still use IE too; I like Firefox and all the others but IE came with my Windows and works with everything anyway (I know it leaves little desire for a developer using CSS though, hehe). Not really caring about WinMobile 7 as I will be firmly in the MeeGo/Maemo camp for mobile devices for the foreseeable future... So in short, just because you do not like them, and had problems with them, it does not mean everyone else is in the same boat... you must "hold your PC wrong" maybe?

Nice to see that ARM will have some further support in future, they deserve it for their hard work over the years ^^
OPCFW_CODE
Can you import a Mass Effect 2 PS3 save into Mass Effect 3 if the games are from different regions? Most PS3 game saves I tried aren't compatible with different regional versions of the same game, but does Mass Effect 3 allow you to import a Mass Effect 2 save from a different region? For example, if I played Mass Effect 2 off a US disc (Region 1) and start Mass Effect 3 with a UK disc (Region 2), will my old save work? PS3 games are region free but the game saves are apparently not.

I haven't used any cross-region games on my PS3, so I haven't experienced the particulars. Someone does suggest a (fairly complex) method of getting around the PS3 save region lock though (you need a pen drive, PSP or any other USB mass storage device):

1. On your old game/PS3, copy the save to the device (select it, press TRIANGLE and choose Copy, then choose the device).
2. Go to a computer and plug the device in. Go to My Computer > your device, open the PS3 folder and then open SAVEDATA. Copy the folder to the Desktop then delete it from the device.
3. On your new game/PS3, start a new game and save. Copy the NEW save to the device (select it, press TRIANGLE and choose Copy, then choose the device).
4. Go to a computer and plug the device in. Open the PS3 folder again and then open SAVEDATA. Copy that folder to a new folder on the desktop.
5. Now, rename the NEW folder so that it has the same name as the old folder.
6. Open the NEW folder. Here you should see these files: PARAM.PFD, PARAM.SFO, PIC1.PNG, ICON0.PNG, SAVE.DAT (if there are more files that's fine; sometimes PIC1.PNG isn't there, that's fine too). Open the OLD folder. You should see the same items.
7. THIS IS IMPORTANT: MAKE SURE YOU DO NOT MESS UP. Copy PARAM.PFD and PARAM.SFO from the OLD folder to the NEW folder. If you do it the other way round you'll mess up.
8. Delete the saved file from the device, and also delete the OLD folder. Drag the remaining folder into PS3/SAVEDATA/.
9. Go to your PS3, select Saved Data Utility, choose your device and find the data. Copy this (TRIANGLE) and you're done. Delete the save from the device. Play the game and enjoy.

This worked for me -- it should work for you too. This should work for any PS3 game, ME3 included if ME3 is indeed region locked; and since it's EA, I would go ahead and assume they did as much locking and DRM as feasible. This will only work if the save can be copied to the PC at all; I know EA tends to lock their saves so you can only copy them to the cloud, although I don't know if that's the case with ME2's save. This will also require that I own both region versions of ME2.

@JohnoBoy I don't think that's the case with any PS3 save, are you sure that's not something they only do with Origin? You should be able to do this if you get the target region's ME2 save from the internet too. GameFAQs usually has saves for popular regions.

I know Dragon Age 1's save is copy locked, as well as Shadows of the Damned, Alice, Rock Band 1&2 and several others. Not all EA games are like that, but looking over this list you can see it's often the case: http://www.ps3trophies.org/forum/general-ps3-discussion/62819-ps3ts-official-locked-saved-games-thread.html

On stock PS3 firmware, this works only with save files that can be copied. If you are using custom firmware, you can use a file manager or multiMAN to copy the GAMEDATA folder to a USB stick and use PS3Tools from aldostools to hack the save file's region.
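The critical file-swap step (copying PARAM.PFD and PARAM.SFO from the OLD folder into the NEW one, leaving everything else untouched) can be sketched in a few lines of Python — the function and folder names here are made up for illustration:

```python
import os
import shutil

def transplant_save(old_dir, new_dir):
    """Copy PS3 save metadata from the OLD save folder into the NEW one.

    Per the steps above: only PARAM.PFD and PARAM.SFO move old -> new;
    everything else in the NEW folder stays as the new region's game wrote it.
    """
    for name in ("PARAM.PFD", "PARAM.SFO"):
        src = os.path.join(old_dir, name)
        if not os.path.exists(src):
            raise FileNotFoundError(f"expected {name} in {old_dir}")
        shutil.copy2(src, os.path.join(new_dir, name))
```

Run this against the two desktop folders before dragging the NEW folder back into PS3/SAVEDATA/.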
STACK_EXCHANGE
background: i have 2 sites at DH. mine, and another company.

kfry (Aug 16th, 2006 - 08:38:44 / #8225914) - i’m informed by karl that a header injection was used to send spam from my site. the mail form on my site is unimportant. the mail forms on the other company’s site are important. the mail form on my site has been disabled.

clickignite (Aug 16th, 2006 - 09:10:46 / #1477815) - i call the other company, their stuff works fine, so i assume just my site is affected. i email DH to confirm this.

chatra (Aug 17th, 2006 - 11:09:11 / #8235990) - i receive notice that all my sites are unable to send email from the mail forms on all of my DH sites. it’s thursday, and the other company is at a convention, they don’t care about the mail forms, say it can hold to the weekend.

clickignite (Aug 20th, 2006 - 12:16:01 / #1482625) - i fix the scripts (they were broken at the time since i didn’t have the zend framework on the system, but at least no longer insecure), i send emails to support and abuse @DH, saying that i’ve fixed the problem.

brian (Aug 20th, 2006 - 23:26:29 / #8263372) - i am told that the issue will be transferred to karl. note that verifying that the scripts are secure will take 2 minutes tops.

clickignite (Aug 21st, 2006 - 06:22:08 / #1483348) - i ask if karl will be in today

jefft (Aug 21st, 2006 - 18:19:14 / #8270790) - i am told that karl is on a different shift than the guy who answered the email, and that he sent karl a note.

clickignite (Aug 21st, 2006 - 20:18:02 / #1484681) - i reiterate the fact that this would take 5 seconds for each of the 3 scripts to be checked out, and how badly i need this fixed. ** i now have inadvertently opened 2 tickets, both of which are unresolved.

clickignite (Aug 22nd, 2006 - 14:12:05 / #1485687) - i open a 3rd ticket, mark it super-important, explain the story. it’s been 2 and a half days since i requested that my sites be allowed to send email again, and i still can’t get anything more than, “sorry, i’ll tell karl”.
these problems are revenue impacting for the other company i have a site for… and before you say “you should have a dedicated box” if it’s that important, know that it’s a small company that gets very little traffic and doesn’t require a lot of fancy stuff. i feel like if dh’s services were as advertised, i wouldn’t be as frustrated as i am right now… we send emails into the customer support abyss, wait for the mocking “it’s almost been 24 hours…” email, then pray that the issue is resolved in the first response, otherwise it’s time to get in the back of the queue again and wait another day. rinse, repeat. i’d upgrade my plan so i could get callbacks, but reading the posts on the forums leads me to believe that it wouldn’t make a difference. it’s been 2 and a half days since i requested a very simple procedure to be done. i’m hoping karl isn’t on vacation. although doubtful, hopefully someone at dh will see this and remedy the situation.
OPCFW_CODE
📢 New Listing: Filecoin 6-month (FIL6) & Special Subscription We will be listing FIL6 (Filecoin 6-month) and opening trading for FIL6/USDT with the specific details as follows: 💰 Opening time for trading: 2020/06/29, 18:00 (GMT +8) At the same time, to celebrate the official listing, BiKi will be launching a “HODL OKS/ODIN and Get FIL6 Subscription” and the details are as follows: 📌【HODL OKS/ODIN and Get FIL6 Subscription】 BiKi Power will be launching the “HODL OKS/ODIN and Get FIL Subscription” from 11:00 (GMT +8) on 2020/06/29 and you can click BiKi Power on our homepage to check or participate. 🔖Subscription time: 2020/06/29, 11:00–13:00 (GMT +8) 🔖Total quota: 20000USDT 🔖Subscription price: 1FIL6=10.23USDT 🔖To participate: Users with more than 100ODIN or more than 500OKS shall be eligible for the subscription 🔖Purchase limit per user: 200USDT 🔖Participation limit per user: Once only 🔖KYC requirements: Not necessary 🔖Subscription mode: Over-raising (300%) 1. The platform will deduct the USDT for the subscription at the time of subscription so participating users are reminded to ensure sufficient available balance in their Exchange Account. 2. Google binding + mobile / email verification is required for this subscription. 3. After the subscription is completed, the subscribed tokens will be released to the successful subscribers not later than 2020/06/29, 18:00 (GMT +8). 4. The final interpretation rights of the activity will be in the sole discretion of BiKi and please refer to the official announcement on our homepage for the accuracy of the activity contents. Name: Filecoin 6-month Official Website: https://filecoin.io/zh-cn/ The InterPlanetary File system (IPFS) is a global, peer-to-peer distributed version of the file system that aims to supplement (or even replace) the hypertext transfer protocol (HTTP) that currently dominates the internet, by connecting all computing devices with the same file system. 
The principle is to replace the domain-name-based address with a content-based address: the user looks up content by what it is, not by the address where it is stored. There is no need to verify the identity of the sender, only the hash of the content. This can make the web faster, safer, more robust and more durable. Filecoin is an incentive layer on IPFS and a decentralized storage market built on IPFS based on the token incentive model. FIL6 is the Filecoin contract issued 6 months after its launch. Note: The above information is provided by the Project and strictly for reference only. Follow us on: English Telegram: https://t.me/BiKiEnglish Vietnam Telegram: https://t.me/BiKiVietnam Chinese Telegram: https://t.me/BiKicoin Russia Telegram: https://t.me/BiKiRussia Philippines Telegram: https://t.me/BiKiPhilippines Nigeria Telegram: https://t.me/BiKiNigeria Iran Telegram: https://t.me/BiKiIran Indonesia Telegram: https://t.me/BiKiIndonesia Bangladesh Telegram: https://t.me/BiKiBangladesh India Telegram: https://t.me/BiKiIndia Arabic Telegram: https://t.me/BiKiArabic Korean Community: https://open.kakao.com/o/gYmlp4Yb
OPCFW_CODE
I remember my first fumble with BASIC on my ZX Spectrum computer back in the 1980s, ploughing through pages of BASIC commands and example code without any real idea of how I could write programs myself. Learn in baby steps - start with something very simple, and add to it. There is no merit in jumping in with both feet unless you have unlimited time and resources. The programs range from LIVE sports, news, movies and radio to music videos and so on. Each piece of software ranges from $40 to $60 and is affordable for most people. There are many reasons for wanting to learn computer programming, and what you want to do with it can help guide you in choosing your path in learning. Actual computer programming can be traced back to the 1880s and the recording of data that was then read by a machine. In structured programming, the program is split into small pieces of code that can easily be understood. Programs that need Object Oriented Programming (OOP) are written in C. One alternative to "visual" vs. "text" is "codeless programming". Computer programmers are able to enjoy working on a variety of projects because of the traits and skills they possess. We all know that computers work in bits and bytes and read and understand the binary digits 0 and 1. Even if you are free to write a program in any language you want, it must be transformed into the language of 0s and 1s before it can be executed.

History Of Computer Software And Programming

Most people use their computer without realizing how it operates "under the hood". With fewer COBOL coders available, companies often have to pay COBOL programmers a higher salary. Your program code must be written as step-by-step instructions using the commands that your choice of programming language understands. That makes sense, since the web has been created and programmed by programmers. A computer programming degree is a very valuable asset on any resume, because it lets you move forward in your software programmer career, ensuring you better pay. A lot of programmers are willing to share their knowledge through free tutorials, forums, tips sites, and articles. If you intend to become a successful computer programmer, or even if you just want to learn computer programming, here are some of the common programming languages that are most demanded in the market and can address every kind of programming problem. Ko [4] explains that end-user programmers should be allowed to focus on their goals, and an important part of the solution is to visualize the whole program execution, not just the output, so it is crucial to show the user the whole program flow, not just text-based bug reports.

Drag And Drop Programming

Algorithms are particular formulas, or applications of a specific theorem, that may be adapted for different variables. Even now, many programs for embedded technology are created in assembly language. The programming results from such an approach are also native .NET Framework objects and can be directly used by other computer languages supporting the .NET Framework. It is best to make a flow chart for your program, or write its algorithm, before you start with the process of writing the program. Top employers of computer programmers include software development companies that create packaged and specialized software.

Clearly no new freelance computer programmer is going to walk into big contracts for extensive programming work with Microsoft or IBM, or win high-value jobs with Fortune 500 corporations.
OPCFW_CODE
Learn how to process WingtraOne images with Pix4Dmatic. Before installing Pix4Dmatic read the minimum and recommended hardware requirements and ensure that your device fulfills the requirements. Pix4Dmatic is faster and more reliable than Pix4Dmapper for corridors and large datasets of more than 5000 images. Use WingtraOne images that have been geotagged using WingtraHub version 1.0 with Pix4Dmatic. Images tagged with older WingtraHub versions do not include orientation and accuracy information in the EXIF and therefore will not be correctly processed in Pix4Dmatic. Step 1. Create the project in Pix4Dmatic Open Pix4Dmatic and drag and drop the images or select the folder from the disk in the highlighted area. Specify the project name and the disk location where the project will be saved and click on Start. Step 2. Image and output coordinate system Pix4Dmatic reads the horizontal and vertical coordinate system from the EXIF of the images. The images should have embedded geolocation and orientation, as importing .csv files is not possible in Pix4Dmatic. For example, for PPK geotagged WingtraOne images where the base location in WingtraHub was provided in WGS 84 system, the horizontal image coordinate system in Pix4Dmatic is automatically set to WGS84 - EPSG:4326 and the vertical coordinate system is automatically set to ellipsoidal height over the WGS84 ellipsoid. The coordinate system of the outputs is defined based on the location of the project. If the image coordinate system is WGS 84, the corresponding UTM zone is used as the output coordinate system. The output coordinate system can be seen at the bottom of the window. Step 3. (Optional) Import GCPs When GCPs are used, the output coordinate system corresponds to the GCPs coordinate systems. If the GCPs coordinate system is geodetic the corresponding UTM zone is used for the output coordinate system. You can import the GCPs by clicking on the highlighted button. 
Import a .txt or .csv with the GCPs coordinates following the format specifications of the Pix4D article and specify the coordinate system, horizontal and vertical, to which the GCPs refer. After the GCPs coordinates have been entered, you need to mark the GCPs. Step 4. Image calibration After creating the project and importing and marking the GCPs (optionally), the calibration of the images can start. RX1 images are of high resolution. To speed up processing select the 1/2 option for the image scale. Then click on Start to start with the calibration process. Step 5. Assess the calibration results Once the images' calibration is finished, take a look at the report in Pix4Dmatic and see the number of calibrated images and the average GSD of the project. In case GCPs are used, check the mean GCP RMS error to assess the accuracy of your project. Step 6. Generate all the outputs for the project Generate the point cloud, DSM and orthomosaic of the mapping area. Select the options Densify, DSM and Orthomosaic and click on Start. Once the processing is finished, you can export the report in a .txt file format.
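As an aside to Step 3, a GCP file is a plain text table of one marker per line. The layout below is illustrative only — an assumed "label, easting, northing, elevation" column order with made-up coordinates; the exact column order and optional accuracy columns must be checked against the Pix4D format article referenced above:

```
# gcps.csv — illustrative sketch, verify column order against the Pix4D article
GCP01,465238.12,5249128.44,312.05
GCP02,465310.87,5249201.19,314.22
GCP03,465190.55,5249260.73,311.48
```

The coordinates must be in the horizontal and vertical system you select during import, not necessarily the image coordinate system.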
OPCFW_CODE
I will provide some pictures; use machine learning to detect the door.

12 freelancers are bidding an average of ₹9458 for this job.

Door detection in ML. As 9+ years experiences in these field. I can give good quality work. I have read the guidelines of your work. I believe that i can provide you the best quality works you are anticipating from thi…

***** Computer vision and machine learning expert with full experience in door detection ***** Hello. I read your description and I am interested in your project. i have rich knowledge and enough experience in door det…

I am a machine learning engineer having 5+ year of experience. My skills-- Machine learning, deep learning, image processing, OpenCV, kaggle project, python, R, data analysis, software development. Deploy ml models to…

Hello, Hope you are doing great! Do you want to build the door detection model from scratch or you want to use some already existing model? (I will help you out doing any of them based on your requirement) I have Ma…

Hello, dear! Nice to meet you! I have read your requirements carefully and I am very interested in your project. I am confident of this project as I'm a professional Image Processing and Machine Learning expert with…

Hi I feel very excited in this project development because I have developed lots of door and window detection projects in floor plan images using Tensorflow and CNN models. For this, we have to train the custom deep le…

HELLO SIR I BELIEVE WHAT U WANT IS A ML MODEL WHICH CAN DETECT DOOR IN AN IMAGE OR A VIDEO. I BELIEVE IT IS POSSIBLE TO ACHIEVE THIS IN OPENCV

I am a co-founder of an Artificial intelligent software startup that works on Face recognition, Speech recognition, Machine learning and other AI stuff. I can help you in detecting doors in images using AI

I have worked in the image classification problems like face mask detection, gender detection using deep learning. Also I have a working experience of classical machine learning.

Hi, I have worked as developer/analyst/manager for years. Currently I've been working as a freelancer. As for machine learning my main focus is on object detection and facial recognition. Projects developed: -…

Hi, I am a python developer and certified AI analyst from IBM. Also I have experience in c, c++ programming and knowledge about SQL. I can help you to do your project, if you would provide me a data.

Hi, As a Big Data Engineer, I believe that I could help you achieve a good model, with a great accuracy, in order to detect doors in images. Let me know if my skills are of interest for you. I'm looking forward to h…
OPCFW_CODE
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = 11, 6  # make the chart wider
import altair as alt

artwork = pd.read_csv('https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2021/2021-01-12/artwork.csv')
artists = pd.read_csv("https://github.com/tategallery/collection/raw/master/artist_data.csv")

artwork.id.count()
artwork[artwork.artist=='Turner, Joseph Mallord William'].count()
artwork.artist.nunique()  # 3336 unique artists

# top artists - Turner, Joseph Mallord William has 39,389 works
artwork[artwork.artist!='Turner, Joseph Mallord William'].groupby(['artist']).nunique()[['id']].sort_values(by='id', ascending=False).head(15).plot(kind='barh', title='Top 15 Artists with most artwork!')

# what time are the artists from? most artists from 1900-1980
artists.groupby(['gender','yearOfBirth']).nunique()[['id']].reset_index().pivot(index='yearOfBirth', columns='gender', values='id').plot(title='What time are the artists from?')

# wordclouds
from PIL import Image
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator

# wordcloud of artwork titles
# combine the text from all rows of the desired column into one big text variable
df1 = artwork
text = str(df1.title[0])
for i in range(1, len(df1)):
    text = text + ' ' + str(df1.title[i])

# Create stopword list: these words won't be included in the word cloud
stopwords = set(STOPWORDS)
stopwords.update(['an example stopword', 'Blank', 'title'])

# Create and generate a word cloud image:
# wordcloud = WordCloud().generate(text)
wordcloud = WordCloud(stopwords=stopwords, max_font_size=50, max_words=100, background_color="white").generate(text)

# Display the generated image:
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.title('wordcloud of artwork titles', fontsize=24, color='firebrick')
plt.show()

# wordcloud showing where the artists are from
df1 = artists
placeOfBirthText = str(df1.placeOfBirth[0])
for i in range(1, len(df1)):
    placeOfBirthText = placeOfBirthText + ' ' + str(df1.placeOfBirth[i])

# Create stopword list: these words won't be included in the word cloud
stopwords = set(STOPWORDS)
stopwords.update(['an example stopword', 'nan'])

# Create and generate a word cloud image:
wordcloud = WordCloud(stopwords=stopwords, max_font_size=50, max_words=100, background_color="white").generate(placeOfBirthText)

# Display the generated image:
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.title('A wordcloud showing where the artists are from.', fontsize=24, color='firebrick')
plt.show()

# Artists from India
artists[artists.placeOfBirth=='Bharat']
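As an aside, the O(n²) string-concatenation loops above can be replaced by a single join over the column. A small sketch on a toy frame (the real script would pass `artwork.title` or `artists.placeOfBirth` instead):

```python
import pandas as pd

# Toy stand-in for the Tate artwork table used above
df1 = pd.DataFrame({"title": ["The Lake", "Blank", None]})

# One-pass equivalent of the concatenation loop; dropna() skips missing
# titles instead of feeding the literal string 'nan' into the word cloud
text = " ".join(df1.title.dropna().astype(str))
print(text)  # The Lake Blank
```

This also removes the need for the 'nan' entry in the stopword list.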
STACK_EDU
If you've been working with Coldfusion very long, chances are you've written a data import script. There are many tools that allow you to migrate data from one database platform or schema to another, and I'm well aware that "guru dogma" states that Coldfusion is not the best tool for things like long-running tasks that can be performed by the database. I'm also a big advocate for letting the database do its job. So it may surprise you to learn that I believe Coldfusion is actually a pretty good choice in many cases - especially if you have to do anything tricky with the data. Take looping, for example: in T-SQL, if you want to loop through a record set and apply logic to the data you will need a cursor. Cursor code isn't hard to write, but it will definitely take you longer and require a bit more skill - especially debugging. Take this example from Books Online. This code works by declaring a query as a cursor, looping through each row and loading local variables @au_lname and @au_fname with values from the row with each iteration. It takes a good bit of cryptic code to do what Coldfusion can do in a few lines. Sometimes data import tasks are very challenging and require a good deal of data manipulation. Yes, you can write very complex SQL to do such things, but you can do it faster in Coldfusion. If you read my blog you know I'm an advocate for letting the database do what it does best. But I'm also an advocate for being cost effective. If you are doing a data migration as part of a deployment, then there will be times when Coldfusion will allow you to quickly import records that may have taken a much greater effort in SQL. For example, you may be tasked with merging 2 databases and eliminating the duplicate data between them. In this simple example we are checking to see if a username from "oldtable" exists in "newtable" and if it does not, we are inserting it. Obviously, typical migration code is vastly more complicated.
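The check-then-insert loop just described might look roughly like this in CFML — a hedged reconstruction, not the post's original code; the datasource "myDSN" and the single username column are placeholders:

```cfml
<!--- Pull every row from the legacy table.
      "myDSN" and the column list are illustrative placeholders. --->
<cfquery name="oldRecords" datasource="myDSN">
    SELECT username FROM oldtable
</cfquery>

<cfloop query="oldRecords">
    <!--- Does this username already exist in the merged table? --->
    <cfquery name="dupeCheck" datasource="myDSN">
        SELECT username FROM newtable
        WHERE username = <cfqueryparam value="#oldRecords.username#" cfsqltype="cf_sql_varchar">
    </cfquery>
    <!--- Only insert when no duplicate was found --->
    <cfif dupeCheck.recordCount EQ 0>
        <cfquery datasource="myDSN">
            INSERT INTO newtable (username)
            VALUES (<cfqueryparam value="#oldRecords.username#" cfsqltype="cf_sql_varchar">)
        </cfquery>
    </cfif>
</cfloop>
```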
In this case we could eliminate the "check" query using SQL (which I prefer) - an INSERT that only fires when the row doesn't already exist. When you do this sort of thing you have an issue you may not have thought about: in Coldfusion, each query against the db is an implicit commit. If you were doing it in T-SQL then each batch would be an implicit commit. If, during SQL cursor code, an error is thrown, the transaction is rolled back - as if the process had never begun. This is not the case in your Coldfusion code, however. That leaves you with the possibility that, in the case of an error, your data would be half imported into "newtable". This is where cftransaction comes to the rescue. You can treat the loop code as if it were a block of SQL - even though it may contain things other than cfquery. An exception will cause a roll-back of all the DB calls that have been made so far. Using cftransaction is simple. Note that I'm not using an isolation level and I'm not explicitly telling the db to commit. The isolation level is probably not an issue if you are doing a pre-deployment data migration script. The "commit" is implied by the end transaction tag. You can get quite granular in your control of specific commits and roll-backs if you like, but in this example we are saying simply, "if the process errors out before it completes then kill the whole thing". It's all or nothing. Now you may be wondering about the accuracy of your duplicate record check. If I insert a username "BOB" inside of my cftransaction, then 10 rows later I try to insert "BOB" again, will the database "see" the first BOB? After all, it's not committed yet - right? The answer is yes. The database will see BOB number 1 and not insert BOB number 2. Even though the transaction is not yet committed, the db can read the new pages because it is "inside" the transaction. Your checks will work as you expect.
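Putting the pieces together — a minimal sketch of the single-query variant wrapped in cftransaction, again with placeholder names; any exception thrown before the closing tag rolls the whole import back:

```cfml
<cftransaction>
    <cfloop query="oldRecords">
        <!--- INSERT only fires when the username is not already present;
              rows inserted by earlier iterations are still visible
              because we are inside the same transaction --->
        <cfquery datasource="myDSN">
            INSERT INTO newtable (username)
            SELECT <cfqueryparam value="#oldRecords.username#" cfsqltype="cf_sql_varchar">
            WHERE NOT EXISTS (
                SELECT 1 FROM newtable
                WHERE username = <cfqueryparam value="#oldRecords.username#" cfsqltype="cf_sql_varchar">
            )
        </cfquery>
    </cfloop>
</cftransaction>
```

The implicit commit at the closing tag is the all-or-nothing behavior described above: either every INSERT lands, or none do.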
OPCFW_CODE
May 05, 2020 · Network Manager. Network Manager aims for network connectivity which "Just Works". The computer should use the wired network connection when it's plugged in, but automatically switch to a wireless connection when the user unplugs it and walks away from the desk.

When set to 'true', NetworkManager quits after performing initial network configuration but spawns small helpers to preserve DHCP leases and IPv6 addresses. This is useful in environments where network setup is more or less static, or where it is desirable to save process time but still handle some dynamic configurations.

On Ubuntu desktop, network-manager is the default service that manages network interfaces through the graphical user interface. Therefore, if you want to configure IP addresses via the GUI, then network-manager should be enabled. An alternative to the Ubuntu network manager is systemd-networkd, which is the default backend service on Ubuntu Server.

Also, if the DNS was changed only for a specific WiFi network in the network settings, NetworkManager uses that value. NetworkManager's job: after obtaining the DNS servers, NetworkManager behaves according to the dns entry in its configuration file.

In most cases your initial DNS settings are pre-configured to utilize your Network Solutions services. For advanced users, Network Solutions allows you to manage your name servers.

A daemon running as root: network-manager. A front-end: nmcli and nmtui (enclosed in package network-manager), nm-tray, network-manager-gnome (nm-applet), plasma-nm. Additionally, there are various plugins available that enable NetworkManager to handle other, special connections like different types of VPN connections.

Aug 21, 2019 · If you are connected to a WiFi network click on the "Wi-Fi" tab.
Otherwise, if you have a wired connection click on the “Network” tab. Select the connection for which you want to set the DNS nameservers and click on the cog icon to open the Network Manager. Select the IPv4 Settings tab. Apr 17, 2020 · In the command, remember to change ADAPTER-NAME with the name of your network adapter you identified on step No. 4, and change X.X.X.X with the IP address of the DNS server that you want to use. Description. NetworkManager.conf is the configuration file for NetworkManager. It is used to set up various aspects of NetworkManager's behavior. The location of the main file and configuration directories may be changed through use of the --config, --config-dir, --system-config-dir, and --intern-config argument for NetworkManager, respectively. This option is helpful if you want to keep one of your Network Solutions services active, for example, your email inbox, and host your other service, such as your website, with another provider. To use the Advanced DNS Manager, your Domain Name Servers must be moved to the Network Solutions managed Name Servers. The Domain Naming System (DNS) is a global naming system used to keep track of computers connected to the Internet. Each type of organization (educational, governmental, commercial, etc) is assigned a domain with the appropriate suffix. The organization at the top of the system is ICANN. The domain Hi, most of the Linux Distributions uses the NetworkManager to configure network connections. Sometimes it is necessary to set a static IP address, i.e. if no DHCP Server is available on the network or you want to setup a peer to peer connection between the computers. 
List connections And set IP Address, DNS search domain, DNS server and default gateway Its also possible to ed - adblock on safari mac - boosting internet connection - bit torrents - pandora music station free - netgear app for android - match live streaming football - connexion par défaut de la passerelle comcast - how to set a router as a repeater - comment débloquer les hangouts google - how to extract files in android - 250 pokemon list - vitesse lente de vyprvpn - how to get vuze to download faster - applications sans internet android
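As a concrete command-line sketch of the nmcli workflow mentioned above (the connection name "Wired connection 1" and the DNS addresses are placeholder examples, not taken from the text), setting DNS nameservers could look like this:

```shell
# List connections to find the one to modify
nmcli connection show

# Substitute your own connection name and preferred DNS servers
nmcli connection modify "Wired connection 1" \
    ipv4.dns "1.1.1.1 8.8.8.8" \
    ipv4.ignore-auto-dns yes

# Re-activate the connection so the new settings take effect
nmcli connection up "Wired connection 1"
```

Setting ipv4.ignore-auto-dns prevents DHCP-supplied servers from being appended alongside the ones you configured.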
What does (MM) mean in the iPad model lineup? While looking at iPad models, I notice several showing as (MM). In particular, the 4th generation iPad mini with Cellular has models A1454 and A1455 with the latter being labeled (MM). The same exists for A1459/A1460 on the larger iPad with a Retina display. What does MM represent? I do not have a direct link or citation on/about Apple to prove my conclusion. But the most logical answer I could gather by reading several sites is that MM would stand for "Multi-mode", signifying that the iPad supports not just CDMA and GSM, but also multiple LTE implementations. The term "multimode" or "multi-mode" is used to designate chips/devices that support multiple, disparate communication technologies (a device with WiFi and Bluetooth can be called multimode too, but we're talking only about cellular here. Plus, all of Apple's iDevices have had WiFi and Bluetooth in all models). Only the GSM + CDMA + LTE models that support multiple carriers have the "MM" tag. Example: The 4th generation iPad and the iPad mini have models that support CDMA/GSM/LTE on Verizon and Sprint. So they're marked as "Cellular (MM)" to give them a more substantial (and carrier neutral) name. The GSM + CDMA + LTE models in older generations that support only Verizon have a "VZ" tag instead of the "MM" tag. Example: The 3rd generation iPad, the first iPad to come with support for both CDMA and GSM on a single device, has a CDMA/GSM/LTE Verizon model, but it does not support Sprint. So it's marked as "Cellular (VZ)" to signify that it's tied to Verizon (and cannot work with another CDMA carrier like Sprint). In Apple's current terminology: "Cellular" implies GSM+LTE "Cellular (MM)" implies GSM+CDMA+LTE (across CDMA carriers) Note that there is no "CDMA only" version of the iPad after the iPad 2, unlike "GSM only" versions that have been and continue to be available. 
You can come to similar conclusions by comparing the cellular support across iPad models/generations and how they relate to the chipsets used, the carriers supported, and their technology implementations. Here are some links related to the iPad specs across generations and multi-mode (as it applies to cellular technology) that would help understand my conclusion better.

Technical specifications for iPads of different generations (pay attention to the technologies/carriers supported):
1. iPad (4th Generation) - Technical Specifications
2. iPad mini - Technical Specifications
3. iPad (3rd Generation) - Technical Specifications
4. iPad 2 - Technical Specifications

Links related to multi-mode (read them completely or search for "mode" to go to multimode-related snippets on these pages):
1. Multi-band and multi-mode phones, Multi-mode and multi-band mobile phones
2. Cell phone bands and modes
3. Qualcomm Fifth Generation Gobi Platform
4. A rare look inside an LTE cell site
5. Sprint - multimodal hardware rollout
6. Intel previews multimode LTE chips

I think it may be Mobility Management, which is a part of a cellular protocol. I'm not familiar with it, but here's the Wikipedia page. I'm an engineer dealing with cellular, so I'm actually gonna read this myself.

Great tip - +1 (I figure someone here will eventually nail down the acronym or we'll have entertaining guesses. Either way, win!)

That link makes MM look like part of a GSM spec, which could make sense to call it that over the CDMA version of iOS devices. It's the CDMA and GSM (dual-mode) devices that Apple calls MM. So the MM that Apple uses cannot be a GSM-specific aspect. In the iPhone 6 lineup, Apple uses these two terminologies in its GSX tech support website:
MM: Multi-mode, i.e. GSM + CDMA + FDD-LTE
MM-TD: Multi-mode, i.e. GSM + CDMA + FDD-LTE + TD-LTE
EXAMPLES:
• IPHONE 6,MM-TD,128GB,GRAY A1586
• IPHONE 6,MM,128GB,GRAY A1549
FDD-LTE (Frequency Division LTE) is the LTE technology used most widely.
TD-LTE (Time Division LTE) is now being implemented in countries such as China.

The MM, or Millennium Media according to what I can find, refers to the CDMA variation of the Cellular iPad for Verizon networks. The non-MM version is for GSM carriers like AT&T. Source: https://discussions.apple.com/thread/5093889?start=0&tstart=0

Millennial Media is just a media platform. Is there anything else you can add… ?

According to this page: http://support.apple.com/kb/ht5452#ipad4 the MM designation occurs on Model A1460: iPad (4th generation) Wi-Fi + Cellular (MM), but not on Model A1459: iPad (4th generation) Wi-Fi + Cellular. And, according to this page: http://support.apple.com/kb/sp662 the differences are as follows:

Model A1459
GSM/EDGE (850, 900, 1800, 1900 MHz)
UMTS/HSPA+/DC-HSDPA (850, 900, 1900, 2100 MHz)
LTE (Bands 4 and 17)

Model A1460*
CDMA EV-DO Rev. A and Rev. B (800, 1900, 2100 MHz)
GSM/EDGE (850, 900, 1800, 1900 MHz)
UMTS/HSPA+/DC-HSDPA (850, 900, 1900, 2100 MHz)
LTE (Bands 1, 3, 5, 13, 25)

4G is available in this model, A1460.
Uranium is not a Manjaro package and 4.2.0-1 is already in the unstable branch. Uranium is packaged by Philm; Bertie is not wrong. @philm, can we switch back to the Arch package? Uranium was originally added following this post: Cura fix PKGBUILD. The current uranium is:

$ pacman -Qi uranium
Name            : uranium
Version         : 4.1.0-2.1
Description     : A Python framework for building Desktop applications.
Architecture    : any
URL             : https://github.com/Ultimaker/Uranium
Licenses        : LGPL
Groups          : None
Provides        : None
Depends On      : python  qt5-quickcontrols  qt5-quickcontrols2  pyqt5-common  python-pyqt5  python-numpy  arcus  python-shapely
Optional Deps   : None
Required By     : cura
Optional For    : None
Conflicts With  : None
Replaces        : None
Installed Size  : 2.59 MiB
Packager        : Philip Mueller <email@example.com>
Build Date      : Fri 26 Jul 2019 16:29:37 SAST
Install Date    : Fri 02 Aug 2019 09:55:35 SAST
Install Reason  : Installed as a dependency for another package
Install Script  : No
Validated By    : Signature

O.k., maybe it's because I am on the unstable branch, and on the unstable branch there is the Arch package (uranium 4.2.0-1). I removed our overlay in stable now as well. Let me know if that fixes it for you. After a quick test, Cura now seems to work with the new uranium 4.2 in stable.

When lightdm-settings is installed it's not possible to use slick-greeter; removing lightdm-settings allows slick-greeter to load. I don't know what you mean. I have both installed here just fine! What are your versions installed? $ pacman -Q lightdm-slick-greeter lightdm-settings

Are you going to update or drop it since it's not needed by Discord anymore?

Same here. And working as it should ... Hm. Is maybe your config screwed up? You could try moving or removing /etc/lightdm/slick-greeter.conf and then reinstalling slick-greeter... ?

Bash is now at patch level 9, Manjaro is still shipping patch level 7. @jonathon (only pinging you mate as you last edited the gitlab repo for it) Also in dot.bashrc at the very end....
# better yaourt colors
export YAOURT_COLORS="nb=1:pkg=1:ver=1;32:lver=1;45:installed=1;42:grp=1;34:od=1;41;5:votes=1;44:dsc=0:other=1;35"
All that can be removed. Thanks for the info.

The update to the new version, timeshift 19.08.1-1, in unstable is done. The prepare function is no longer necessary.
I have questions about xsl:number. This is the most poorly specified instruction I've come across. It's really hard to even know what questions to ask. The way I interpret the XSLT 1.0 spec (and the 2.0 draft doesn't help), <xsl:number format="A"/> must be supported, and it must produce something from the sequence A, B, C, ..., Z, AA, AB, AC, ... where A=1, B=2, etc. The way it is specified, it seems to indicate that the alphabet must be the English alphabet: ABCDEFGHIJKLMNOPQRSTUVWXYZ. Or perhaps it could be any alphabet that starts with ABC and ends with Z, like the Spanish alphabet, which varies depending on who you ask, but for computing purposes I think is generally ABCDEFGHIJKLMNÑOPQRSTUVWXYZ. Or perhaps everything after "A" is just an example, meaning that it very well could be the Swedish alphabet: ABCDEFGHIJKLMNOPQRSTUVWXYZÅÄÖ ... or perhaps Vietnamese, which starts with A and has no Z. Anyway, the implication is that a processor must support some alphabet that contains "A". Or is "A" just a placeholder for any alphabetic character? "When numbering with an alphabetic sequence, the lang attribute specifies which language's alphabet is to be used; it has the same range of values as xml:lang [XML]; if no lang value is specified, the language should be determined from the system environment." It seems to me that if format="A", then the value of lang, whether determined by the processor or specified in the stylesheet, must be a language that contains "A". What happens if the processor supports both English and Hebrew, and I do something like <xsl:number format="A" lang="he"/> ? Or for that matter, <!-- #1488 = Hebrew letter Aleph --> <xsl:number format="א" lang="en"/> ? What does <xsl:number format="B"/> mean? At the very least, I know "B" must represent 1. If the default language is English, does this mean the sequence must be B, C, D, ..., Z, BB, BC, BD, ... ? The spec also says format="I" must be supported by using Roman numerals. 
What does format="I" mean when the language is not English? The spec says "In many languages there are two commonly used numbering sequences that use letters. One numbering sequence assigns numeric values to letters in alphabetic sequence, and the other assigns numeric values to each letter in some other manner traditional in that language. In English, these would correspond to the numbering sequences specified by the format tokens a and i." This seems to indicate that using "I" for Roman is a "traditional" English convention, and (reading further) that I could use letter-value="alphabetic" to override this interpretation. If my theory about format="B" is correct, then format="I" with letter-value="alphabetic" would result in I, J, K, ... sequences. I don't know. I have more questions, but I'll just stop here. I really hope this stuff gets cleared up in 2.0, although that doesn't help me much in trying to properly implement 1.0. Mike -- Mike J. Brown | http://skew.org/~mike/resume/ Denver, CO, USA | http://skew.org/xml/ XSL-List info and archive: http://www.mulberrytech.com/xsl/xsl-list
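To make the alphabetic-sequence question concrete, here is a small Python sketch (my own illustration, not anything from the spec) of the A, B, ..., Z, AA, AB, ... numbering the post describes, parameterized by alphabet so that, e.g., a Swedish lang would simply swap in its 29-letter alphabet:

```python
def alpha_number(n, alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
    """Bijective base-k numbering: 1 -> A, 26 -> Z, 27 -> AA, 28 -> AB, ..."""
    digits = []
    while n > 0:
        # subtract 1 first: there is no "zero digit" in this scheme
        n, r = divmod(n - 1, len(alphabet))
        digits.append(alphabet[r])
    return "".join(reversed(digits))

print(alpha_number(1), alpha_number(26), alpha_number(27))  # A Z AA

# With the Swedish alphabet, 27 is still a single "digit":
swedish = "ABCDEFGHIJKLMNOPQRSTUVWXYZÅÄÖ"
print(alpha_number(27, swedish))  # Å
```

This also shows why lang matters for format="A": the same integer maps to different tokens, and the rollover from single to double letters happens at a different point, depending on the alphabet's length.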
from dataclasses import dataclass
from typing import *  # NOQA

from rpy2 import robjects
from rpy2.robjects import r
from rpy2.robjects.packages import importr

from ._base import STSBasedAlgorithm

surveillance = importr("surveillance")


@dataclass
class Cusum(STSBasedAlgorithm):
    r"""The Cusum model.

    Attributes
    ----------
    reference_value
    decision_boundary
    expected_numbers_method
        How to determine the expected number of cases. The following
        arguments are possible: {"glm", "mean"}.

        ``"mean"``
            Use the mean of all data points passed to ``fit``.
        ``"glm"``
            Fit a GLM to the data points passed to ``fit``.
    transform
        One of the following transformations (warning: Anscombe and NegBin
        transformations are experimental):

        ``"standard"``
            standardized variables z1 (based on asymptotic normality);
            this is the default
        ``"rossi"``
            standardized variables z3 as proposed by Rossi
        ``"anscombe"``
            Anscombe residuals (experimental)
        ``"anscombe2nd"``
            Anscombe residuals as in Pierce and Schafer (1986), based on a
            2nd-order approximation of E(X) (experimental)
        ``"pearsonNegBin"``
            Pearson residuals for NegBin (experimental)
        ``"anscombeNegBin"``
            Anscombe residuals for NegBin (experimental)
        ``"none"``
            no transformation
    negbin_alpha
        Parameter of the negative binomial distribution, such that the
        variance is :math:`m + \alpha \cdot m^2`.

    References
    ----------
    .. [1] G. Rossi, L. Lampugnani and M. Marchi (1999), An approximate CUSUM
       procedure for surveillance of health events, Statistics in Medicine,
       18, 2111-2122
    .. [2] D. A. Pierce and D. W. Schafer (1986), Residuals in Generalized
       Linear Models, Journal of the American Statistical Association, 81,
       977-986
    """

    reference_value: float = 1.04
    decision_boundary: float = 2.26
    expected_numbers_method: str = "mean"
    transform: str = "standard"
    negbin_alpha: float = 0.1

    def _call_surveillance_algo(self, sts, detection_range):
        # Build the R control list expected by surveillance::cusum;
        # m=NULL tells the R side to use the mean of the data.
        control = r.list(
            range=detection_range,
            k=self.reference_value,
            h=self.decision_boundary,
            m=robjects.NULL
            if self.expected_numbers_method == "mean"
            else self.expected_numbers_method,
            trans=self.transform,
            alpha=self.negbin_alpha,
        )
        surv = surveillance.cusum(sts, control=control)
        return surv
A ``worst case'' estimate of the minimum number of trials Mt required for applying the test in Eq. 6 is given by Purgathofer. Scaling the values so that samples range from 0 to 1, the number of trials required is: For an image anti-aliasing problem with samples that ranged in value from 0 to 255, Purgathofer found that useful results were obtained when an interval of D = +/- 13 (i.e. d = 13/255 = .05 in Eq. 8) was allowed with a confidence of 80%. These values of d and alpha give a minimum sampling of 32 trials/pixel. The sampling rates required when computing radiance in ``real world'' floating point values and subsequently mapping to the device with a tone operator are much higher. While the form of tone operators varies, a typical radiance value on the order of 0.001 times the light source radiance is mapped in Eq. 3 to a value Np on the order of 100. An interval on the order of +/- 10 in the final display then requires an interval d equal to 0.0001 times the light source radiance. Scaling the problem so that the light source radiance is 1, a value of d = 0.0001 with a confidence of 80% in Eq. 8 gives a minimum sampling rate of 16,094 trials per pixel! The reorganization of the equation of transport given in Eq. 7 reduces the variance so that for most pixels in an image this worst case does not occur. However, there will be small regions in the image in which the ``worst case'' is encountered. These isolated regions will be noisy because they are undersampled. We summarize these high variance cases in Fig. 1. Figure 1(a) illustrates the first type of high variance integration. Both light sources and non-light sources can be visible through some pixels. Sampling Eq. 2 for these pixels is essentially sampling a binomial distribution with several orders of magnitude in the two alternatives. Convergence is extremely slow in such cases.
Figure 1(b) illustrates a second high variance case - the integration of direct illumination for a diffuse-like surface (first integral on the right of Eq. 7). For a point which has a full view of the light source, a small number of trials are needed to estimate the cosine and distance terms in the integral. However, when the view of a light source is partially obscured a large number of trials may be required to estimate the visible area. The smaller the fraction of the source that is visible, the larger the number of trials needed. Since a light source has a high radiance, just one ``hit'' will result in a large sample standard deviation. Figure 1(c) illustrates the integration of reflected light for a specular-like surface, the third integral on the right of Eq. 7. The BRDF is concentrated on a small lobe near the specular direction. A high variance in samples for this case occurs when a small portion of this lobe is subtended by a light source. The fourth type of integration with high deviations is the case of ``caustic paths'', shown in Fig. 1(d). The integral diagrammed in Figure 1(d) is the second term on the right of Eq. 7. A ``caustic'' appears when a small portion of the incident hemisphere is subtended by the image of a light source in a specular-like reflection. When a sample ray hits this small image, a large deviation in the sample is introduced. In any of the four cases, high deviations are expected in some region of the scene, not at one point. Light source edges, penumbrae, fuzzy specular reflections and caustics spread through regions. For Q pixels in one of these regions, only a small number of pixels q will have obtained samples hitting the high radiance portion of the domain of integration. The q pixels will appear to be adding noise to the region of Q-q pixels that appear to be accurate. Actually, all Q pixels are equally valid. 
Rather than throwing out the q ``noise'' pixels, the true value that should have been calculated for the whole region should be an average of all Q values.
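The general shape of a confidence-interval stopping test like the one discussed above can be sketched as follows. This is not Purgathofer's exact criterion from Eq. 6 (whose constants are not reproduced here), only an illustration: keep sampling a pixel until a normal-approximation confidence half-width z*s/sqrt(n) falls below the tolerance d. The z value and the sample limits are illustrative assumptions.

```python
import statistics


def estimate_pixel(sample_fn, d=0.05, z=1.28, n_min=8, n_max=100_000):
    """Draw samples until the confidence half-width z*s/sqrt(n) < d.

    sample_fn returns one radiance sample in [0, 1]; z = 1.28 roughly
    corresponds to 80% two-sided confidence under a normal approximation.
    """
    samples = [sample_fn() for _ in range(n_min)]
    while len(samples) < n_max:
        s = statistics.stdev(samples)
        if z * s / len(samples) ** 0.5 < d:
            break
        samples.append(sample_fn())
    return statistics.fmean(samples), len(samples)


# A flat region converges immediately; a high-variance "binomial" region
# (a light source visible through part of the pixel) needs far more trials.
mean, n = estimate_pixel(lambda: 0.5)
```

The four high-variance cases of Fig. 1 are precisely the situations where `s` stays large for a long time, driving `n` toward its worst-case bound.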
GtkRadiant (latest stable release 1.6.5, August 13, 2016) is a level design tool for Linux, OS X, and Microsoft Windows, developed by id Software and Loki Software. It is used to create maps for a number of video games. It is maintained by id Software together with a number of volunteers. GtkRadiant's roots lie in id Software's in-house tools. Some of the early UI design decisions influencing it could be seen in QuakeEd, the original Quake mapping tool for NeXTSTEP. The first direct code ancestor however was QE4, the in-house Quake II level editor id Software used to build Quake II levels and later made available with the Quake II SDK. Robert Duffy used the released QE4 source code to develop QERadiant, which became a very popular editor. id Software took the code in-house again to develop Q3Radiant, the Quake III Arena level design tool. All these tools were Windows-only applications. GtkRadiant was released in 2001 as a modification of Q3Radiant introducing two major changes: it used the GTK+ toolkit so that it could also support Linux and Mac OS X, and it was also game engine-independent, with functionality for new games added as game packs. Timothee Besset, who became responsible for id Software's post-Quake III Linux ports and much of the network programming, was hired to maintain the game editor. GtkRadiant is free software distributed under the GNU General Public License. For a long time, the application source code was publicly available from id Software's Subversion repository, and it was under a dual license where new code was under GPL-compatible free software licenses and the core Q3Radiant code was under id Software's proprietary license, primarily because it used parts of Quake III Arena code. This dual-license system made development difficult, and inhibited use of the editor in commercial projects.
On August 19, 2005, Quake III Arena source code was released along with the Q3Radiant source code. The license for both the GtkRadiant editor and toolset (notably Q3Map2, the BSP compiler) was changed in February 2006, and publicly released under the GPL on February 17. ZeroRadiant (or GtkRadiant 1.6.0) is an upcoming version of the GtkRadiant level editor based upon the 1.4.0 architecture and design. It is currently in development for new id Games projects.

- CodeRED: Alien Arena - uses a specialized version called AARadiant.
- Doom 3 - a Windows-only variant called D3Radiant (based on Q3Radiant, not GtkRadiant) is integrated into Doom 3. GtkRadiant 1.5.x can be used to make Doom 3 maps in Linux, by utilizing Doom 3's integrated map compiler in conjunction.
- Heretic II
- Quake II
- Quake III Arena
- Quake Live
- Quake 4 - being based on the Doom 3 engine, it also uses a version of D3Radiant internally, called Q4Radiant. However, GtkRadiant 1.5 can still be used to create maps on Linux.
- Return to Castle Wolfenstein
- Wolfenstein: Enemy Territory
- Soldier of Fortune II: Double Helix
- Star Trek: Voyager Elite Force
- Star Wars Jedi Knight II: Jedi Outcast
- Star Wars Jedi Knight: Jedi Academy
- UFO: Alien Invasion
- Urban Terror

Support has previously existed for the following: In addition, the following games and projects use GtkRadiant as a map editor, by using the GtkRadiant Quake III Arena game pack and an external map compiler or converter: Custom game packs exist for these games: The following games use modified versions of GtkRadiant as a community map editor in combination with a series of other tools available in their editing kits: The following games use modified versions of GtkRadiant, but do not have a map editor available for the community.
Up until now, I have been using Visual Paradigm for modeling. Visual Paradigm supports XMI for import/export, but only versions 1.0, 1.2, and 2.1 of XMI. Steve opened an issue about compatibility between the Visual Paradigm XMI and 2 other tools in the SPDX spec git repo: https://github.com/spdx/spdx-spec/issues/164 I don't have any opinion on the model data interchange standard as long as it is a standard that supports UML modeling and there is a reasonable choice of tools which support the standard. For 3.0, I am open to switching tools to GenMyModel.com if it simplifies the import/export issues. It looks like GenMyModel supports import of XMI versions 1.1 and 2.0 (note the non-intersecting versions compared to Visual Paradigm). As an aside, I find it interesting that as a standards group attempting to create a unified SBOM standard, we're having trouble finding a standard interchange format for our UML model - a standard that has long been established. Hopefully, we'll do better at creating interchange data formats between tools 😉 From: Kay Williams <email@example.com> Sent: Sunday, December 15, 2019 7:54 PM Subject: FW: [cdfoundation/sig-security-sbom] Support schema definitions (#6) Hey everyone, we've been having a discussion on GitHub about the best way to collaborate on modifications to the SBOM model. Currently we are modeling via XMI. This appears to have limitations based on tools available. An alternative is to model in XSD. Can folks weigh in with pros/cons/alternatives? Feel free to comment by replying either to this alias, or directly to the GitHub issue. From: Steve Springett <firstname.lastname@example.org> Sent: Sunday, December 15, 2019 7:34 PM To: cdfoundation/sig-security-sbom <email@example.com> Cc: Kay Williams <Kay@thewilliams.net>; Assign <firstname.lastname@example.org> Subject: Re: [cdfoundation/sig-security-sbom] Support schema definitions (#6) The XMI that was shared cannot be opened in StarUML, but can be opened in GenMyModel.
So it appears that in order to look at the model, we'll always need to be online, and we'll always need to use this single tool. The exported XMI does not contain a diagram. So there's no way to visualize the relationships between the various objects without having to resort to images. This is so frustrating. Seriously, if we were modeling in XSD, anyone could use virtually any XML authoring tool and visualize the model while they author it.
[Dbix-class] [OT][ANNOUNCE] SQL Generation with SQL::DB nomad at null.net Fri Sep 7 09:22:29 GMT 2007 On Fri Sep 07, 2007 at 09:09:28AM +0200, Emanuele Zeppieri wrote: > Mark Lawrence wrote: > >I'm not sure why people keep thinking SQL::DB requires more string > >manipulation than SQL::Abstract. Going back to one of the examples > >I first posted: > > $schema->query( > > select => [$track->title, $cd->year], > > from => [$track, $cd], > > distinct => 1, > > where => ($track->length > 248) & ! ($cd->year < 1997), > > union => $query2, > > ); > >The arguments to the query() method is a LIST of keyword / value > >pairs (also a data structure). The keywords are not used as strings > >in the generated SQL, but are commands to SQL::DB about what type > >of data structure follows. > > my @query = ( > > construct_a => $data, > > construct_b => $data, > > construct_c => $data, > > ); > >It is in fact the same principle as SQL::Abstract but with a > >different syntax. Nothing more, nothing less. > By the above example, it does not seem so. Not at all. > The most important thing is the WHERE clause, and it seems that it's > implemented by a Perl expression: so how do you suppose to manipulate > such an expression *by code* other than by string operations? > (Forget the rest, just show how do you build a complex where clause by > code without string concatenations and such.) I'm not quite sure I understand what you are asking. Do we agree that at the final output stage, the various values must be concatenated together to produce an SQL statement? This is the basic internal workings of whatever abstraction tool is being used. If that is so, then you are asking how the following is evaluated? ($track->length > 248) & ! ($cd->year < 1997) By the rules of precedence the items within the brackets are looked at first (by Perl). ($track->length > 248) $track->length is an object based on SQL::DB::Expr. 
Since we have overloaded the ">" operator to the "gt" method, what Perl does is call that method. The return value from this is another SQL::DB::Expr object that has a string value "track.length > ?" and a bound parameter "248". The same process happens for the other bracketed expression. So what we have is $expr1 & ! $expr2 I think "!" has higher precedence, and "!" is overloaded to SQL::DB::Expr::not, so Perl calls that, which returns another SQL::DB::Expr object ($expr3) with a string value of "NOT (cd.year < ?)" and a bound value of "1997". So what we now have is $expr1 & $expr3 "&" has been overloaded by an "and" method, so Perl calls that, which returns yet another Expr object with a string value of "(track.length > ?) AND NOT (cd.year < ?)" and two bind values of 248 and 1997. There is certainly some string manipulation to produce the string values, but the evaluation of the expression is all Perl logic. The finer details are inside SQL::DB::Expr. My apologies if I haven't answered your question.
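The overloading mechanism described above translates directly to other languages. Here is a minimal Python analogue (the class and column names are invented for illustration, and Python's ~ stands in for Perl's overloaded !): each comparison builds an expression object carrying a SQL fragment plus its bound values, and combining operators concatenate both.

```python
class Expr:
    """Tiny SQL expression builder via operator overloading."""

    def __init__(self, sql, binds=()):
        self.sql, self.binds = sql, tuple(binds)

    def __gt__(self, other):          # plays the role of the "gt" method
        return Expr(f"{self.sql} > ?", self.binds + (other,))

    def __lt__(self, other):
        return Expr(f"{self.sql} < ?", self.binds + (other,))

    def __invert__(self):             # plays the role of Perl's "!"
        return Expr(f"NOT ({self.sql})", self.binds)

    def __and__(self, other):         # plays the role of Perl's "&"
        return Expr(f"({self.sql}) AND ({other.sql})",
                    self.binds + other.binds)


where = (Expr("track.length") > 248) & ~(Expr("cd.year") < 1997)
print(where.sql)    # (track.length > ?) AND (NOT (cd.year < ?))
print(where.binds)  # (248, 1997)
```

As in SQL::DB, no user-visible string concatenation happens at the call site; the expression is evaluated by the host language's own precedence rules, and the SQL text and bind values fall out at the end.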
The RPC is a MIDI controller and implements the age-old 1980s serial MIDI bus, and the Protosynth must be capable of interfacing with it. But in general use a USB device port is much more practical than the three ubiquitous MIDI ports; it allows connecting the synth directly to a laptop or other computer at any time. And USB host ports are starting to become popular in sequencers also. So that means I need an interface module that has both, and that can talk to the main synthesizer board over two serial buses. The mainboard processor must take care of interleaving the commands received over the two buses. Standard MIDI ports are no problem, I had the basic circuitry working with the RPC and could simply lift it from there. The synth’s LPC1678 microcontroller has several UARTs, and it’s very easy to configure them according to the MIDI standard. USB was going to take some more work though. I didn’t have enough space on the DSP mainboard to make use of the LPC’s built-in USB controller, so I decided to keep it that way and went for an off-board solution. My approach was to build an external board that could speak MIDI over USB, and pass the data stream over an UART to the mainboard. Luckily USB-capable microcontrollers are a dime a dozen these days, so it’s only a matter of choosing a suitable one for the purpose. A little bit of searching found me MocoLUFA, which is a complete implementation of a standard MIDI USB device for the ATmega, created by morecat_lab. The ATmega8u2 it was intended to be used with looked very nice, and its bigger brother ATmega32u2, while pin compatible, would have plenty of capacity for future expansions if necessary. The Arduino Uno is one example of the use of these chips; the Arduino team very successfully replaced the previous FTDI USB controller with an ATmega8u2 to reduce the cost of their boards. With inspiration from the Uno schematic and from the ATmega32u2 datasheet, I drew up the first version of the MIDI board. 
The design was straightforward. I used the 4N28 optocoupler and through-hole components for the MIDI side because I had those at hand. Except for the bypass capacitors. I have a lot of 0603 size 0.1uF caps now! 🙂 In the default configuration (F1 unpopulated in rev2 schematic) the device is self-powered: it will NOT draw its power off the USB bus. That means it will disconnect from the computer when it is powered off, as expected for a stand-alone device. It requires an external 3.3V power supply, and the signal voltage for all communication to and from the mainboard is 3.3V. The MIDI DIN5 connectors are very close to each other in the first revision of the board; while there are some DIN jacks that should fit, I dragged them a little bit apart for the second revision. Almost all of the GPIO pins on the ATmega are wired to an extra pin header next to it. Four are connected to SMD LEDs. The ATmega8u2 does not have a dedicated VBUS pin for detecting whether it is connected to a USB host or not, and the MocoLUFA software can be configured to use some other GPIO pin for this purpose. In the end I left VBUS unconnected, because I wasn't sure if the ATmega could handle overvoltage on the pins. The data sheet indicates this should be avoided. The USB firmware still works, albeit not completely according to the standard. The two serial buses connecting the interfaces to the mainboard run at different bitrates. The real MIDI interface must be configured at 31250 bps. The USB interface doesn't seem to have any such limitation, so I set it to 125 kbps. Just because it was a nice round number with the ATmega running on an 8 MHz clock. I'm not yet sure what kind of flow control the USB bus is capable of… For programming the ATmega you will need an AVR programmer that can be hooked to the 6-pin ICSP port. I used a 5V Arduino Mega 2560 with the ArduinoISP firmware installed on it.
The supply voltage of the Arduino is not an issue as long as you keep the MIDI board otherwise completely disconnected during programming. The MIDI ports are designed for a 3.3V power supply, so if they are powered by 5V they may fry any connected MIDI devices! During programming the board will draw power through the ISP port. Here is the schematic, click for a larger image: I will add the gerber files and BOM here later. Drop me an email if you’re interested in them or the Eagle source files (see bottom of page for the address). I have several rev1 PCBs left, and will most likely solder together at least a few more if all goes well. There are many possible interesting configurations of the board. For example, if the ATmega32u2 is sufficient for your application all by itself, the MIDI ports could be connected directly to it… How about a stand-alone arpeggiator board? For more detail: USB MIDI interface
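An aside on the bitrates mentioned above: both 31250 bps and 125 kbps divide the 8 MHz clock evenly in the AVR UART's normal (16x oversampling) mode, which is exactly why they are convenient choices. Here is a quick sanity check of the UBRR divisor formula from the ATmega datasheet, sketched in Go so it can run anywhere (the helper name is mine, not from any AVR code):

```go
package main

import "fmt"

// ubrr computes the AVR UART baud-rate register value in normal (16x)
// mode: UBRR = F_CPU/(16*baud) - 1 (from the ATmega datasheet). The
// result is only exact when F_CPU is an integer multiple of 16*baud.
func ubrr(fcpu, baud int) (div int, exact bool) {
	div = fcpu/(16*baud) - 1
	exact = fcpu%(16*baud) == 0
	return
}

func main() {
	const fcpu = 8000000 // the board's 8 MHz clock
	for _, baud := range []int{31250, 125000} {
		d, ok := ubrr(fcpu, baud)
		fmt.Printf("%6d bps -> UBRR=%d exact=%v\n", baud, d, ok)
	}
}
```

This prints UBRR=15 for 31250 bps and UBRR=3 for 125000 bps, both exact. With a 16 MHz clock 31250 bps also divides evenly (UBRR=31); the MIDI rate was reportedly chosen in the first place because it divides the 1 MHz clocks common in its era.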
Join Online Courses for Free During the Corona Break

Increase your productivity while you have nothing to do during the Corona break. Finding enough hours in the day to work through your tasks or goals can be difficult. Sometimes the day has slipped away and you have accomplished little, and you don’t know why. It is also frustrating when you know that part of the reason you cannot get anything done is because of a lockdown, or because you cannot utilize your time well. Invest some time in one or more of these courses to help increase your productivity.

- Advanced Hydraulics
“This course on “Advanced Hydraulics” describes the flows and properties in open channels. A total of 41 lectures are devised for this course. After attending this course, a student will be able to describe the various types of flows in open channels, the velocity distribution across and along the channel, hydraulic jumps, and turbines and pumps. The student will be able to design the channel sections and drains, jumps, and pumps for various hydraulic and hydrologic projects.”
Join Course: https://bit.ly/2JlvcqB

- Data Science: Productivity Tools
This course explains how to use Unix/Linux as a tool for managing files and directories on your computer and how to keep the file system organized. You will be introduced to the version control system git, a powerful tool for keeping track of changes in your scripts and reports. We also introduce you to GitHub and demonstrate how you can use this service to keep your work in a repository that facilitates collaboration.
Join Course: https://bit.ly/2w0Adls

- Design of Steel Structures
“This course covers Introduction: Properties of Structural Steel, I. S. Rolled Sections, I. S. Specifications. Design Approach: Factor of Safety, Permissible and Working Stresses, Elastic Method, Plastic Method, Introduction to the Limit States of Design.
Connections: Type of Connections, Riveted, Bolted and Welded Connections, Strength, Efficiency and Design of Joints, Modes of Failure of a Riveted Joint, Advantages and Disadvantages of Welded Joints, Design of Fillet and Butt Welds, Design of Eccentric Connections. Tension Members: Net Sectional Area, Permissible Stress, Design of Axially Loaded Tension Member, Design of Member Subjected to Axial Tension and Bending.”
Join Course: https://bit.ly/3bxOFQR

- Dynamics and Controls
This is an interactive course about the basic concepts of systems, control, and their impact on all human activities. First, the basic concepts of systems, dynamics, structure, and control are introduced. Then, looking at many examples in nature and human-made devices, we will realize that the dynamic behavior of most systems can be modified by adding a control system. Later we will see how knowing how to evaluate the dynamic behavior of a system and measure its performance will provide the tools to design new controlled systems fulfilling some requirements.
Join Course: https://bit.ly/2xt8L07

- Entrepreneurship in Emerging Economies
The focus of this course is on individual agency—what can you do to address a defined problem? While we will use the lens of health to explore entrepreneurial opportunities, you will learn how both problems and solutions are inevitably multi-disciplinary in nature, and we will draw on a range of sectors and fields of study.
Join Course: https://bit.ly/2UqmE8b

- Exploring Sustainable Development
Have you ever wondered why there is such a focus on Sustainable Development in the public debate today? In this course, you will be introduced to the concept of sustainable development and how it has evolved to what it currently means: a holistic, ongoing process to tackle the environmental, social and economic challenges of human development.
Sustainable development is more important than ever, not least for problem solvers aiming to create sustainable solutions at the workplace and in society.
Join Course: https://bit.ly/2y9JHvc

- Geotechnical Measurements & Explorations
The course prepares the student (graduate level) to make effective use of soil exploration, in-situ tests, and the interpretation of test results in the design of foundations and soil-structure interaction problems. The first module covers index property tests, consolidation tests, direct shear tests, etc., and is a general overview. The remaining modules deal with triaxial (static and cyclic) and simple shear testing under stress and strain control with pore pressure measurements; subsurface exploration; planning, drilling and sampling techniques; in-situ field tests; and the relevant theoretical concepts and data interpretation for determining the engineering properties of soils. Their application to geotechnical design is presented in such a manner that readers who are unfamiliar with the subject will not face any serious problems in understanding.
Join Course: https://bit.ly/2QSL8Vu

- People Management Skills
“Naturally, the journey to becoming a line manager can be challenging and new managers are often left feeling overwhelmed. You’ll learn about best practice processes in recruitment and induction, and identify market trends that will impact your organization. You’ll also discover how to develop teams and individuals, and manage good workplace performance.”
Join Course: https://bit.ly/33Ty5Iu

- Python Programming
“The fundamental design cycle of computer science and computer programming: writing code, executing it, interpreting the results, and revising the code based on the outcomes. Usage of the fundamental atoms of programming: variables, mathematical operators, logical operators, and boolean arithmetic.
Control structures for developing dynamic programs: conditionals, loops, functions, and error handling.”
Join Course: https://bit.ly/2UpQEB1

- Research Methods: An Engineering Approach
This course is designed for engineering students conducting postgraduate research work on engineering projects. The objective of the course is to translate current research methods, which are mostly from a social science perspective, into something more relatable and understandable to engineers. Our hope for this course is to go beyond the concepts to understand the actual reasons for doing research in a certain way. While engineers are the main target audience, non-engineers will find this information useful as well.
Join Course: https://bit.ly/39ol6jq

- Robot Development
This course opens an in-depth discussion and creates a better understanding of the field of developmental cognitive robotics. This field takes direct inspiration from child psychology theories and findings to develop sensorimotor and cognitive skills in robots.
Join Course: https://bit.ly/33UIQKP

- Six Sigma and Lean: Quantitative Tools for Quality and Productivity
“Thinking Six Sigma & Lean” is deployed at all production sites and is a cornerstone of our global quality excellence program serving the automotive and security industries. The edX Professional Program offered by the Technical University of Munich, Six Sigma and Lean: Quantitative Tools for Quality and Productivity, is one of the best programs available for developing quality professionals who know how to solve problems and can apply the Six Sigma and Lean toolset. The program provides solid Six Sigma and Lean fundamentals up to a Green Belt level, reinforced with practice and test questions, and provides real depth on how these tools are applied in industry, using case studies and real examples from participants.
Join Course: https://bit.ly/2UO2a8e
Apply Design Thinking to your Quality Process and Take Your Product to the Next Level

Innovative startups reach a point in their growth where they must incorporate formal quality processes into development. The trick is to do so in a way that feels like it adds value to the product, not unnecessary friction to the development process. By utilizing Design Thinking as a basis for setting the Quality culture, companies have an opportunity to transform their development process while creating a culture of true continuous improvement. You’ve probably heard of Design Thinking if you read Fast Company, Wired, or any of a dozen industry magazines that talk up hot business trends – and if not, check out this article from Wired, “Why You Are Design Thinking’s Holy Grail.” Design Thinking is a problem-solving methodology made famous by Stanford University’s Design School. It breaks problem-solving down into the following steps: Empathize, Define, Ideate, Prototype, Test… and then repeat as necessary. Design Thinking has been shown to yield innovative and successful products that have elevated some very well-known companies to iconic success. As author Andrew Reid mentions in his above-referenced Wired article, “[Design Thinking] sits right up there with agile software development, business process management, customer relationship management, and so on. It’s a real business term and a practice that supports successful product development.” So what does Design Thinking have to do with Quality? Quality processes typically involve identification of a problem, root cause analysis, immediate containment, and corrective and preventive actions. Picture Empathizing and Defining as key tools for framing and investigating your quality issue, and Ideation, Prototyping, and Testing as critical components in developing corrective and preventive actions (CAPA), and you will see immediate parallels.
Design Thinking is all about “Failing Forward” – which is really just shorthand for always learning from user feedback and mistakes to iterate and build again, but better. And this is exactly what you want in a healthy Quality organization. Design Thinking can set the tone for both Development and Quality, as your team empathizes with users through observation and feedback early in the design cycle to build the best possible first prototype, and then expects to learn and iterate through testing. Quality also uncovers issues and feeds more data into the development loop as a valuable part of the process, not as useless overhead. The Design Thinking approach makes the discovery of failures part of a larger creative effort and reduces defensive mental blocks. Broad ideation and creative investigation of this feedback then lead to innovative solutions. Think about companies like Apple, Square, Airbnb, Nest, and Tesla – these companies connect with customer needs and factor in feedback throughout the development process to disrupt and dominate in their chosen markets. As the team adopts Design Thinking into its processes over time, it can refer to past lessons learned to inform new successful products that meet users’ needs. And this is where the documentation of feedback, collaboration, and ideation in a Quality system can be extremely helpful. When quality documentation is produced in a culture of design thinking and stored so that engineers, operations, manufacturing, and quality can all access and contribute to it, it stays useful over the long term. All the great empathy, observation, innovation, and testing can then be used to drive process improvements and product leaps that will lead to greater business success. For more information about Design Thinking, a great resource is the d.school bootcamp bootleg. Check it out, and see how incorporating Design Thinking into your quality management process might take your company to the next level.
Ever since Apple decided to put Intel processors in their Macs there have been attempts by enthusiasts to run Mac OS X on commodity hardware – with mixed results. The key to installing Mac OS X on a non-Mac computer is using the right hardware. If your hardware is as close to Apple's kit as possible, you have the best chance to succeed. The so-called Hackintosh community has come a long way the past few years in making it easy for “normal people” to install Mac OS X on their PC. Since I was tired of using Debian Linux on my desktop, dual booting to Windows to play the occasional game of World of Warcraft, I decided to give installing Mac OS X a try. But there was a problem. My current PC is powered by an AMD processor (AMD Phenom II X6 1055T, to be precise). Simply put, installing Mac OS X on an AMD CPU is not going to work. So, the first thing I did was go over to the OSx86 Project Hardware Compatibility List and see what hardware is most compatible with Mac OS X 10.8.2 (the most recent version of OS X at this time). I already knew I’d need a shiny new Intel CPU, and thus also a new motherboard.

Choosing a CPU

Choosing a CPU for your CustoMac is not very difficult, because you are limited to Intel. Since the release of the new MacBooks Apple officially supports the Ivy Bridge architecture, which means about 50% lower power consumption and a 5-15% speed increase (link). The main candidates were:
- Intel i5 3570
- Intel i5 3570K
- Intel i7 3770K
I won’t go into too much detail here, but I chose the Intel i5 3570. The i7 offers Hyper-Threading at a price bump of about € 100,-. For me, this was not worth the money. Then there’s the choice between the 3570 and the 3570K. The ‘K’ version has fewer features, but is unlocked, allowing it to be easily over-clocked to higher speeds. The price difference between the 3570 and the 3570K is minimal, but I’m not planning to over-clock my CPU, so I went for the slightly cheaper Intel i5 3570 processor.
Choosing a motherboard

Next came a more difficult decision: the motherboard. Again, the Hardware Compatibility List was a great help here. In the end I chose the Gigabyte Z77-DS3H. The pros of this board are that it’s well supported by Mac OS X and the OSx86 community. This board is special because it features Gigabyte’s 3D UEFI BIOS. This BIOS would make it easy to install Mac OS X untouched on your machine. I didn’t end up using this UEFI feature, but nonetheless, support for this board is incredible.

The full hardware list

So, recycling other parts from my current PC, I built the following CustoMac configuration:
- Gigabyte Z77-DS3H Motherboard (Buy at Amazon)
- Intel i5 3570 CPU @3.4Ghz (Buy at Amazon)
- 16GB RAM at 1333Mhz
- 1x 120 GB OCZ Agility 3 SSD
- 1x 1TB Western Digital HD
- 1x 2TB Western Digital HD
- XFX ATI Radeon HD 6870 1GB (Buy at Amazon)
Note 1: The above Amazon links are affiliate links.
Note 2: I could have upgraded my memory to 1600Mhz units, which would be faster. But I have no use for the old memory, so I chose to re-use it for now.

Before you get started, you should prepare an installation USB drive on another Mac. You’ll need at least 8GB of space on the drive.
- Buy Mountain Lion from the App Store on your Mac and download the installer. If you already purchased Mountain Lion, re-download it.
- Download UniBeast for Mountain Lion and follow steps 1 and 2 from this guide.
At this point you have a bootable USB drive with the Mountain Lion installer on it. To prepare the actual installation, remove any devices you don’t need, like extra hard drives, DVD/BluRay drives, etc. In my case I also pulled out the ATI 6870 and used the onboard Intel HD Graphics during installation. Plug your USB drive into a USB 2.0 (black, not blue) port on the motherboard. Make sure to use one of the ports on the back of your computer; those are directly attached to the motherboard and have the greatest chance of success. Now, boot up the computer and enter the BIOS.
There are two important changes you need to make.
- Set SATA to
Then select the USB drive as the bootable device and boot.

Booting the installer

You’ll see the UniBeast boot screen, which shows a ‘USB’ option (and possibly others, depending on what’s on your disks). Choose ‘USB’ - but don’t press ENTER just yet. Instead, type -x, which will show up on the screen. Then, press After a few minutes you should have the Mac OS X Installer in front of you. Go ahead, install this baby.

Notes on Fusion Drive

It’s possible to create a CustoMac Fusion Drive using an SSD and a regular hard disk. When you’re in the installer, choose ‘Terminal’ to open a terminal window and follow the steps in this fusion drive guide. I was able to create and install Mac OS X on a Fusion Drive without problems. The only knack was that the custom bootloader you need is not Fusion Drive aware, which makes it difficult to use. In the end I decided not to use a Fusion Drive setup, and just installed everything on the SSD.

Completing the installation

Now comes the tricky part. The installation is done and your CustoMac wants to reboot. Let it, but leave the USB drive connected. Your machine will boot up with the same boot menu as before, but instead of ‘USB’ you should now be able to select ‘Macintosh HD’. Select that, enter -x followed by You’ll now be taken through the final steps of installation, like setting up iCloud and creating a user account. When finished, you should be on your new Mac desktop.

Installing custom kexts

Now, your CustoMac can only boot with the USB drive. Let’s change that by installing a bootloader and some kernel extensions.
- Download MultiBeast for Mountain Lion.
- Run the installer and select the following options:
  - UserDSDT or DSDT-Free Installation
  - Miscellaneous => FakeSMC
  - Audio => Realtek ALC8xx => Without DSDT => Latest version for ALC8887/888b
  - Network => maolj’s AtherosL1cEthernet
  - Disk => TRIM fix for 10.8.1+
  - Bootloaders => Chimera
That’s all.
Install that stuff. Now, you should be able to reboot your CustoMac and boot it without the USB drive. If things don’t work out (like a black or white screen, kernel panics, whatever), just plug in your USB drive again, boot from it and select your ‘Macintosh HD’. At this point, you should have a working CustoMac with sound and network working. The only thing missing is a proper graphics card. You’ll need to make some tweaks to the Chameleon plist file. Then shut down your computer and install the graphics card. Note: this works for my XFX ATI Radeon HD 6870 card. There may be subtle differences for different versions and brands. Just use the Google to find hints, boot with the USB drive to get to your system and make updates as needed. In /Extra/org.chameleon.Boot.plist make sure you have the following entries:

<key>AtiConfig</key>
<string>Duckweed</string>
<key>AtiPorts</key>
<string>5</string>
<key>Graphics Mode</key>
<string>1920x1080x32</string>
<key>GraphicsEnabler</key>
<string>Yes</string>
<key>PciRoot</key>
<string>1</string>

Depending on which slot you used for your graphics card, you may have to set I also had to add Kernel Flags, but you may or may not need them. Now, reboot one last time and everything should go smoothly. If your system comes up without any troubles, start attaching those other disks and drives you had disconnected during the installation. There shouldn’t be any issues here. Congratulations. You now have a CustoMac! Keep in mind that you should not blindly install any update you see. Installing an update may change the bootloader or change kernel extensions and break your system. A good tip is to create a full disk image of your SSD using a tool like SuperDuper. In case of shit hitting the fan after an update, you can easily restore your disk to working order. In my case, I’ve attached an old 500GB drive to store this disk image. It works great.

Shiny and fast!

Just as a side-note, my CustoMac is blazingly fast.
It’s the combination of the fast Ivy Bridge architecture, the i5 processor, and the SSD. I’ve measured boot-up time from pressing the power button to the Mac OS X login screen at about 11 seconds.
WTF is DIME again? - writing a scanning tool for my HP LaserJet

TL;DR: I got pissed with HPLIP not working, then with the HP Smart app requiring account registration, and reverse engineered the network communication and the HP Smart app to develop a tool called HPSimpleScan, written in Go, that can be used to scan (not only) from this printer. In the process I’ve written a Kaitai Struct definition of the long-forgotten DIME format and contributed it to the Kaitai Struct formats repo.

As a Linux user, I can safely say that printing on Linux is awesome. It really is. 98% of the time, you just connect the printer somehow, doesn’t matter if through USB or a network, and it just works. No driver installing or anything; thanks to CUPS (or similar) and widely supported PDLs it just works out of the box. Problems may arise once you try to scan, however. Even more so when scanning over the network. There are a few “widish-ly” supported standards (like eSCL, WSD etc.), but often only new/certain scanners support them, and a lot of the time they still need proprietary software. This brings me to my printer.

HP LaserJet 100 colorMFP M175nw

It’s quite old but has everything you could ever want from a printer/scanner all-in-one. It prints colour, has an ethernet port, an ADF, no BS online printing services if you don’t want them - just perfect. Printing was never a problem with this baby (PCL 6 ftw), but scanning… oh boy… scanning. According to the docs, it supports TWAIN and Windows Image Acquisition (WIA), both of which are AFAIK USB only. So how do you scan via the network? On Windows, you just download the correct driver from the HP website and you’re good to go. On Linux, well… HPLIP is HP’s partly-opensource-partly-proprietary Linux driver for HP devices. That’s great when it works, but sucks bad when it doesn’t. It worked for me for quite a long time; it was always a huge hassle to set up in the beginning, but then it worked okay. Until I reinstalled my laptop in early 2019.
After that, I never managed to get it working again. I tried manual discovery, I tried different protocols, I tried different versions, I tried sudo systemctl stop firewalld, I tried different SANE drivers, like sane-airscan - nothing helped. When I needed to scan something, I would spend a full evening fiddling with HPLIP, then give up at like 2:00 AM and scan either from Windows or the mobile HP Smart app. After like 5-6 of those evenings I gave up completely, used the app right away and then transferred the file to my computer. Which worked flawlessly until…

“What if we make them create an account?”

I can hear some clever head at an HP business strategy meeting ask that question. Now you had to register to use the app - to use YOUR printer. Along with, you guessed it, agreeing with all the possible usage of your juicy personal data in the world. That pissed me off immensely. Now it was on, this was personal. And I thought to myself: how hard could it be to write my own driver?

Developing my own scanning tool

PCAPs, JADX & chill

I downloaded an older version of the Android app from apkmirror. Loaded it up in an emulator, fired up Wireshark, started scanning and soon enough: It was a SOAP API, which as always with SOAP was on one side disappointing, but on the other side relieving, because it could have been something much worse (in the 90s-2000s people^W Microsoft experimented with all sorts of things). The request was clearly getting ScannerElements, presumably the possible scanner options, and the printer returned them! From the capture, I reconstructed how the app requests and retrieves the scan (in the non-ADF mode): The app also interlaces those requests with GetScannerElements, probably just to check if everything is happening as it should. The GetJobInfo requests and responses looked very clear as well. The RetrieveImageRequestResponse however was weird. It was clearly some binary format combining XML and JPEG into one response? What the heck?
This bugged me for a long while; at the time I completely missed the Content-Type: application/dime. So instead I decompiled the app with JADX to see how it was parsed. After an hour or so I found the code that seemed responsible for saving the image to disk: If we skip the unimportant parts, it gets the JobID, then feeds it to function b that probably returns an object representing that Job; this object and a filepath ending with /dime_message are fed to function a, which returns another object that is then checked for being binary; the file property is then somehow iterated over, and if the type equals image/jpeg that part gets saved to a file <CURRENTMILLIS>.jpeg. Now it finally occurred to me to google dime_response and I found out about…

Do you know MIME? It is a way of embedding multiple files of different Content-Types into a single file. It was developed in the 90s to be used in email, where you often want to embed text, HTML, pictures or other attachments at the same time into a single text message you then actually send over. In the beginning of the message, a boundary string is specified that divides the separate parts, and before each part there is a header specifying what Content-Type the given part is (and how it is encoded etc.). Simple and plaintext:

Content-Type: multipart/mixed; boundary=frontier

This is a message with multiple parts in MIME format.
--frontier

This is the body of the message.

DIME is Microsoft’s early-2000s try at making MIME more efficient by making it binary. For example: instead of having Content-Type: before each part, you just specify it to be at a given offset and of a given length, and save space and bandwidth. It never even made it to an RFC; it’s still a draft. Okay, so that explains it. The response is multipart: part XML, part JPEG. The JPEG part is big, so it’s divided into multiple parts with metadata in between. The question now was how to get the JPEG out of it.
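For the curious, the fixed part of a DIME record header is only 12 bytes, and decoding it is straightforward. Here is a minimal sketch in Go based on my reading of the draft (field names are mine; they are not from any existing parser):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// DimeHeader is the fixed 12-byte header that starts every DIME record
// (per draft-nielsen-dime; all multi-byte fields are big-endian).
type DimeHeader struct {
	Version    uint8  // 5-bit format version
	MB, ME, CF bool   // message begin / message end / chunked-data flags
	TypeFormat uint8  // TNF: how the Type field is to be interpreted
	OptionsLen uint16 // lengths of the variable-size fields that follow,
	IDLen      uint16 // each of which is padded to a 4-byte boundary
	TypeLen    uint16
	DataLen    uint32
}

// parseDimeHeader decodes the first 12 bytes of a DIME record.
func parseDimeHeader(b []byte) (DimeHeader, error) {
	if len(b) < 12 {
		return DimeHeader{}, fmt.Errorf("need 12 bytes, got %d", len(b))
	}
	return DimeHeader{
		Version:    b[0] >> 3,
		MB:         b[0]&0x04 != 0,
		ME:         b[0]&0x02 != 0,
		CF:         b[0]&0x01 != 0,
		TypeFormat: b[1] >> 4,
		OptionsLen: binary.BigEndian.Uint16(b[2:4]),
		IDLen:      binary.BigEndian.Uint16(b[4:6]),
		TypeLen:    binary.BigEndian.Uint16(b[6:8]),
		DataLen:    binary.BigEndian.Uint32(b[8:12]),
	}, nil
}

func main() {
	// Hand-built example: version 1, MB set, media-type TNF (1),
	// a 16-byte type string and 1000 bytes of payload.
	raw := []byte{0x0c, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x03, 0xe8}
	h, err := parseDimeHeader(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("version=%d MB=%v typeLen=%d dataLen=%d\n", h.Version, h.MB, h.TypeLen, h.DataLen)
}
```

After the fixed header come the options, ID and type fields and then the payload, each padded to a 4-byte boundary; a chunked JPEG shows up as a record with the CF flag set, followed by continuation records.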
Because DIME is truly an obscure thing, the only parsers I could find were in Perl, Java or PHP. No Golang :( So what to do now? To quote from the official website:

Kaitai Struct is a declarative language used to describe various binary data structures, laid out in files or in memory: i.e. binary file formats, network stream packet formats, etc. The main idea is that a particular format is described in Kaitai Struct language (.ksy file) and then can be compiled with ksc into source files in one of the supported programming languages. These modules will include a generated code for a parser that can read the described data structure from a file or stream and give access to it in a nice, easy-to-comprehend API.

This seemed like the ideal tool for the job, so with the help of the draft, this Microsoft article from 2002 and this very helpful article by Imran Nazar, which helped me understand the format more quickly, I declared the format in Kaitai Struct and it worked like a charm! And now for the “easy part”. Using this awesome tool I converted all the XML to Go structs. Because this is SOAP, I also had to make a prepare method for each struct that would set the static attributes such as schemas, XSD URLs, encoding styles etc. Then I used the Kaitai compiler to compile the .ksy definition to Go. Then all that was left was some basic CLI code on top of it :)

sijisu@ThinkSUSE ~ $ hpsimplescan
HPSimpleScan - simple scanning for some older HP printers/scanners, especially the HP LaserJet 100 colorMFP M175nw

hpsimplescan [global options] command [command options] [arguments...]
status, i      get the current scanner/printer status
scan, s        scan from the scanner platen to file
scanadf, sa    scan from the scanner ADF to folder
help, h        Shows a list of commands or help for one command

-i IP          IP or hostname of the scanner/printer to connect to (default: "192.168.1.3")
-p port        port of the SOAP API on scanner/printer (default: 8289)
--debug, -d    debug output (default: false)
--verbose, -v  verbose output (default: false)
--help, -h     show help (default: false)
--version, -V  print the version (default: false)

Recently I’ve added support for the ADF (because I needed it lol), but many features are still missing. We will see if they will ever make it. While writing this article I found that the SOAP API is in fact almost identical to WSD (the kinda-scanning-standard from the beginning, do you remember?). The CreateScanJobRequest and GetScannerElementsRequest are identical, GetJobInfo is missing, and RetrieveImageResponse explicitly specifies MIME as the response format. So maybe my printer has some first unfinished prototype of WSD? I think sometimes it’s okay to get angry at things, because it can force you into making them better and, if nothing else, learning something in the process. If I had just kept using an older version of the app (or just used Windows :P), I would have been fine, but I would never have learned about the intricate world of scanning and obscure formats. You can find the project on my Gitlab
Prevent extra spacing between lines from \raisebox

I create superscripts for indexes for notes manually with \raisebox. What I don't like is that LaTeX changes the line spacing to fit a manual superscript into a line. For instance:

\documentclass{article}
\usepackage{setspace}
\begin{document}
\scriptsize
\parbox{3cm}{
\begin{spacing}{0.8}
TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST
{\tiny\raisebox{3pt}{b}}TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST
TEST TEST TEST TEST TEST TEST TEST\end{spacing}}
\ \ \ \ \ \ \ \ \
\parbox{3cm}{
\begin{spacing}{0.8}
TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST
TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST
TEST TEST\end{spacing}}
\end{document}

creates (without the black lines) Left \parbox with manual superscript (-> too much spacing in the line before the superscript), right \parbox without (-> equal spacing). How can I get rid of the extra spacing in the left \parbox? \raisebox can be replaced if there is a better way. It is ok if the superscript overlaps with the text in the previous line.

You need to use the (first) optional argument of \raisebox to compensate. It allows you to tell LaTeX how high the raised box officially is. So setting it to 0pt would ignore its height. There is also a second optional argument which determines the depth. The dimensions \height, \width, \depth and \totalheight can be used in all three arguments and hold the original dimensions of the content.
\documentclass{article}
\usepackage{setspace}
\begin{document}
\scriptsize
\parbox{3cm}{
\begin{spacing}{0.8}
TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST
{\tiny\raisebox{3pt}[0pt]{b}}TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST
TEST TEST TEST TEST TEST TEST TEST\end{spacing}}
\ \ \ \ \ \ \ \ \
\parbox{3cm}{
\begin{spacing}{0.8}
TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST
TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST TEST
TEST TEST\end{spacing}}
\end{document}

I normally use \raisebox{verticalshift}[0pt][0pt]{text}, but this will make the box have zero height/depth. What are the best values to supply for the two optional middle arguments to give the box exactly ordinary line height/depth? I would think that this is relevant if the line consists only of such material.

@LoverofStructure: You can use the dimension macros mentioned in the answer: \raisebox{<shift>}[\height][\depth]{text} will typeset a raised (or lowered, for negative shifts) "text" but with the original height and depth. You can also do calculations based on the original dimensions if you use the calc package or e-TeX's \dimenexpr .. \relax. Also, my adjustbox package provides similar modificators which all allow arithmetic expressions.

Many thanks! The reason why I asked is that I wasn't sure whether the dimensions of \height, \depth, etc. are those of the supplied text or those of any potential text around the \raisebox; it's the latter I am interested in. Could you very briefly confirm?

@LoverofStructure: Yes, it is the height and depth of the content of \raisebox (see the last sentence of my answer again). Macros do not have knowledge about the potential text around them.

So is there actually no way to specify "height and depth of ordinary text around as if it were there (such as Word Word \raisebox{...}[...][...]{...} Word Word"?
If I have an entire line or paragraph full of text that is \raisebox-modified, I would still like LaTeX to pretend that the \raiseboxes weren't there and that every line instead has the ordinary default height. (I was gonna create a separate TeX.SE question for that, but I figured it's very similar to this one.)

@LoverofStructure: I don't understand why you need to do this with the text around the raised material. Simply use \raisebox{<shift>}[\height][\depth]{text} to make LaTeX take the original height and depth of the raised content. The height and depth of the line is always the maximum height and depth of any content, so as long as the rest of the line isn't higher or deeper than the native height and depth of the raised text you will get a normal line.

Thanks. Your last sentence is key: I didn't know that the default height/depth is always ordinary text height, no matter what's in the line. With this said, if I really don't want the line height to be affected, I might as well use [0pt][0pt] instead of [\height][\depth], because I actively want the dimensions of the text to not affect the lines.

Generally, you could use \smash to remove a height, such as {\tiny\raisebox{3pt}{\smash{b}}}

Why don't you simply use \textsuperscript?

\documentclass{article}
\usepackage{setspace}
\begin{document}
\scriptsize
\parbox{3cm}{
\begin{spacing}{0.8}
TEST TEST TEST TEST TEST TEST TEST TEST
TEST TEST TEST TEST TEST TEST TEST TEST
\textsuperscript{\smash{b}}TEST TEST TEST TEST
TEST TEST TEST TEST TEST TEST TEST TEST
TEST TEST TEST TEST TEST TEST
\end{spacing}}
\ \ \ \ \ \ \ \ \
\parbox{3cm}{
\begin{spacing}{0.8}
TEST TEST TEST TEST TEST TEST TEST TEST
TEST TEST TEST TEST TEST TEST TEST TEST
TEST TEST TEST TEST TEST TEST TEST TEST
TEST TEST TEST TEST TEST TEST TEST TEST
TEST TEST
\end{spacing}}
\end{document}
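To summarize the two variants discussed in the comments as a minimal sketch (the 3pt shift and the content "b" are just the values from the question's example):

```latex
% Keeps the content's original height/depth, so the line stays normal
% as long as nothing else on it is taller or deeper:
\raisebox{3pt}[\height][\depth]{b}
% Reports zero height/depth, so the raised content never affects
% line spacing (it may overlap the previous line):
\raisebox{3pt}[0pt][0pt]{b}
```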
Three questions that could improve your Agile team

This post is contributed by App Dev Manager Justin Scott, who asks three very important Agile questions.

Agile has been adopted by many successful companies who value delivering quality, incremental change on a more frequent basis than a team using a waterfall development methodology can. Teams that embark on the journey of introducing Agile often hear about all the good that comes from Agile, but are rarely informed about all the challenges they will face putting Agile into place. Below are three questions to ask about your Agile team that might help take your team to the next level:

Is your development team too siloed?

Is your team arranged so that work in specific areas always goes to the same developer? This can feel efficient, but in the long term this is a bad practice for a few reasons. First, when estimating story points and tasks, a siloed team will not have the back-and-forth banter that brings the wisdom of the group to bear and keeps wild estimates in check. To remedy this, look for opportunities for team members to work on areas they are not as familiar with. A good start might be for each team member to take on 20% of their work in items they are less familiar with than other members of the team. While this may not be as efficient in the short term, it will begin to pay dividends as more team members become able to work on different parts of the code. This will also help reduce risk when developers leave the team.

Are you getting full value from the retrospectives?

The retrospective is probably the single most important ceremony on an Agile team. It's often an uncomfortable session for many developers because it forces the team to look inward and be critical of current processes. Many teams tend to undervalue it by shying away from the real problems and just getting through the meeting.
These same people sometimes have no problem complaining about the team at the watercooler. The trick to making this meeting work includes fostering a team atmosphere that embraces getting real about what is going on. The key is for the team members to be tactful and still deliver the tough feedback that fosters positive team changes. There are also different ways to conduct a retrospective meeting that can keep feedback anonymous if open communication is not yet embraced. One such way is through the use of sticky notes in a start/stop/continue retrospective format. In this type of session, each participant is given a stack of sticky notes and 5 minutes to write down individual ideas they believe should be started, stopped, or continued. The end result looks like the diagram below. The scrum master then goes to the board and reviews each one publicly, reading each sticky note's text and discussing it a bit with the team. After going through each one, the team agrees on one or two changes that can be applied to the next sprint.

Are planning sessions being fully embraced?

Teams that are new to Agile sometimes feel that they are in more meetings than they are used to. This is especially true for the sprint planning meetings. If the team is doing two-week sprints, it is typically recommended that the sprint planning meeting is around 4 hours. The thought of a four-hour meeting can send chills down some developers' spines. As a result, some teams may elect to skimp on the planning in an effort to get back to their desks. This can hurt a team because the planning session is where much of the discussion and estimation occurs. Doing this valuable exercise poorly can lead to erratic velocity, which can lead to distrust between the developers and the product owner.
The planning sessions also act as a great cross-training mechanism: everyone on the team gets to hear about the upcoming needs even if they don't end up working on those particular areas. To fix this, use the full planning time allotted, and understand that when a team is newer or does not have a good understanding of the product, additional time might be needed.

In summary, having a productive Agile team has its challenges. An ideal team is constantly looking for ways to improve at both the individual and team level. Answering the three questions above can help a team overcome a few of these challenges.

Premier Support for Developers provides strategic technology guidance, critical support coverage, and a range of essential services to help teams optimize development lifecycles and improve software quality. Contact your Application Development Manager (ADM) or email us to learn more about what we can do for you.
- Developer(s): Nippon Telegraph and Telephone & Preferred Infrastructure
- Stable release: 0.4.3 / April 19, 2013
- License: GNU Lesser General Public License 2.1

Jubatus is an open-source online machine learning and distributed computing framework that is developed at Nippon Telegraph and Telephone and Preferred Infrastructure. Jubatus has many features like classification, recommendation, regression, anomaly detection, and graph mining. It supports many client languages: C++, Java, Ruby, and Python. Jubatus uses Iterative Parameter Mixture for distributed machine learning.

Features:
- Multi-classification algorithms
- Recommendation algorithms
- Regression algorithms: Passive Aggressive
- Feature extraction methods for natural language

References:
- Ryan McDonald, K. Hall and G. Mann, Distributed Training Strategies for the Structured Perceptron, North American Association for Computational Linguistics (NAACL), 2010.
- Gideon Mann, R. McDonald, M. Mohri, N. Silberman and D. Walker, Efficient Large-Scale Distributed Training of Conditional Maximum Entropy Models, Neural Information Processing Systems (NIPS), 2009.
- Koby Crammer, Ofer Dekel, Shai Shalev-Shwartz and Yoram Singer, Online Passive-Aggressive Algorithms, Proceedings of the Sixteenth Annual Conference on Neural Information Processing Systems (NIPS), 2003.
- Koby Crammer and Yoram Singer, Ultraconservative Online Algorithms for Multiclass Problems, Journal of Machine Learning Research, 2003.
- Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz and Yoram Singer, Online Passive-Aggressive Algorithms, Journal of Machine Learning Research, 2006.
- Mark Dredze, Koby Crammer and Fernando Pereira, Confidence-Weighted Linear Classification, Proceedings of the 25th International Conference on Machine Learning (ICML), 2008.
- Koby Crammer, Mark Dredze and Fernando Pereira, Exact Convex Confidence-Weighted Learning, Proceedings of the Twenty-Second Annual Conference on Neural Information Processing Systems (NIPS), 2008.
- Koby Crammer, Mark Dredze and Alex Kulesza, Multi-Class Confidence Weighted Algorithms, Empirical Methods in Natural Language Processing (EMNLP), 2009.
- Koby Crammer, Alex Kulesza and Mark Dredze, Adaptive Regularization of Weight Vectors, Advances in Neural Information Processing Systems, 2009.
- Koby Crammer and Daniel D. Lee, Learning via Gaussian Herding, Neural Information Processing Systems (NIPS), 2010.
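The Passive-Aggressive family referenced above can be sketched in its simplest binary PA-I form (following Crammer et al., 2006). This is a generic illustration, not Jubatus code; Jubatus's actual implementations are distributed and more elaborate:

```python
def pa_update(w, x, y, C=1.0):
    """One Passive-Aggressive (PA-I) update for a binary example.

    w: weight list, x: feature list, y: label in {-1, +1},
    C: aggressiveness cap. Returns the updated weight list.
    """
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    loss = max(0.0, 1.0 - margin)                # hinge loss
    norm_sq = sum(xi * xi for xi in x) or 1.0    # guard against a zero vector
    tau = min(C, loss / norm_sq)                 # PA-I step size, capped by C
    return [wi + tau * y * xi for wi, xi in zip(w, x)]

# A misclassified example moves the weights; a well-classified one does not:
w = pa_update([0.0, 0.0], [1.0, 0.0], 1)
print(w)  # [1.0, 0.0]
print(pa_update(w, [1.0, 0.0], 1))  # [1.0, 0.0] — margin already >= 1
```

The "passive" half (no change when the margin is already at least 1) is what makes the online updates cheap enough to combine with Iterative Parameter Mixture across workers.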
Many of you might have been developing applications on the .NET Core 2.0 version, as that was the main release with lots of features. But did you know that the next big release in the .NET Core journey has already happened, with some additional features? Have you migrated your applications to the .NET Core 2.1.x version yet? If you haven't, no worries; in this post we are going to talk about how we can migrate/update our applications to the new version of .NET Core. I hope you will like this article.

As I said, this post is only going to cover the steps you may have to take to update your existing .NET Core application to 2.1.x. I will not be talking about what .NET Core is, why it is important, etc.; if you have those questions in your mind, I strongly recommend you read the Microsoft documentation here. We will be creating a .NET Core 2.0 application first, and then we will update the same to the 2.1.x version. Sounds good?

.NET Core 2.0 to 2.1.x

Let's create a .NET Core 2.0 application

As usual, open your favorite IDE, Visual Studio 2017, and click on File -> New Project. As you can see, the ASP.NET Core version is 2.0. You can even check the version in your application properties. Now let's run our application and see the output.

Update Visual Studio 2017 to the latest

As a first step, to work with .NET Core 2.1 you need to update your IDE. There are a couple of ways you can update your IDE, but the easiest way is to check the Notifications panel. Let's see how to do that. Go to the View tab and click on Notifications. Now you can see all the notifications, as shown in the image below. I always update my Visual Studio whenever there is an update, so you won't be able to see that notification on my panel. If you haven't updated, and you check the Notifications panel, I am sure you will find an update link there from which you can proceed. Once you have updated, you can verify your version.
Install the latest .NET Core

As you can see, we have installed the .NET Core SDK, the .NET Core Runtime, and the ASP.NET Core Runtime, and now the new version is available for you in Visual Studio 2017.

Changing the target framework

As the first step, we need to change the target framework for our application. You can select it in the project Properties, or you can edit the .csproj file. Now it is time to change the Microsoft.AspNetCore.All reference to Microsoft.AspNetCore.App, as with this version Microsoft has reduced the number of dependencies. We should always install only what we actually need, right, instead of everything? You will also need to change the rest of your package references to the new version. Here I just wanted to show how you can do that, so look into the packages you have in your application and change them accordingly.

What is new in .NET Core 2.1.x

.NET Core 2.1.x is a release with some cool new features; I strongly recommend you read about those here. I may write a new post about these features soon. Thanks a lot for reading. Did I miss anything that you think is needed? Did you find this post useful? I hope you liked this article. Please share your valuable suggestions and feedback.

Your turn. What do you think? A blog isn't a blog without comments, but do try to stay on topic. If you have a question unrelated to this post, you're better off posting it on C# Corner, Code Project, Stack Overflow, or the ASP.NET Forum instead of commenting here. Tweet or email me a link to your question there and I'll definitely try to help if I can.
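The target-framework and metapackage changes described in the post can be sketched in the .csproj like this (a hedged example: the 2.0.x version number and the web-app SDK are illustrative, and your project will likely carry additional package references):

```xml
<!-- Before the migration, the project targeted 2.0 with the AspNetCore.All
     metapackage, roughly:
       <TargetFramework>netcoreapp2.0</TargetFramework>
       <PackageReference Include="Microsoft.AspNetCore.All" Version="2.0.9" />
-->
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <!-- After: target .NET Core 2.1 -->
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- After: the trimmed-down metapackage; no Version attribute is needed,
         as the SDK resolves the version matching the shared framework -->
    <PackageReference Include="Microsoft.AspNetCore.App" />
  </ItemGroup>
</Project>
```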
our relationship is a bit special. I came in February and it resulted in a big discussion on MTAs and the like. I came again recently, have been active, and proposed improvements, but feel like I am running against walls. The point is, we collide at every point. It's the community on the one side and me on the other. At least this is how it feels to me. I realize that my opinions and point of view are quite different from yours (at least from those who speak up).

I trust that all opinions expressed on the issues have been based on good reason and intent. Please don't feel that any sides are being taken, especially against an individual. Suggestions for improvements are always welcome by me, and by others from what I've observed here.

The main topic of our disagreement is compatibility. As I'm sure you're aware, many of us have been using mh/nmh for a very long time (decades!). Whatever its deficiencies, it works. Furthermore, I use nmh a _lot_. So compatibility is quite critical.

It seems to me as if you were doing compatibility for compatibility's sake. This is sticking to old cruft. Caring too much for some old userbase likely keeps you from getting new users while the old ones slowly vanish. This also includes frontends. It is a dead end.

If it's old cruft that works, I'm happy to stick to it. If anyone wants to fix things under the hood, without breaking anything, or add new capabilities, great!

I value clearer and simpler solutions above compatibility in any case. I understand the importance of compatibility in the case of a backend, but it should never be for its own sake, and this is what I feel here again and again. Is nmh just good enough for you and therefore better not changed? Is updating your setups once a year more effort than the improvements of modernization are worth? It could be, and I would understand. The point is: What is the goal of nmh?

At this point, compatibility has got to be part of nmh's goal.
It was even at its inception, according to docs/README.about: "intended to be a (mostly) compatible drop-in replacement for MH"

That's what I don't understand. No matter what I try to do, I conflict with you. This indicates that we probably have too different views of

With pleasure I see the discussion of nmh2, which could finally be a step in my direction. But before I cheer up too much, I'd better know: What's the goal for nmh2, if it should come to happen?

Good question. I think it can allow more rapid evolution of nmh because it wouldn't be as bound by compatibility, and:

1) It can be a fork of the current nmh code base.
2) It can be a proving ground for new ideas and implementations. If some are deemed suitable for inclusion into nmh, that's fine. If not, that's fine, too. It should be useful and usable on its own.
3) It can sacrifice compatibility, whereas nmh must try to avoid user-visible, incompatible changes.
4) It can be implemented in languages other than C.
5) It need not be portable to odd (say, non-POSIX) platforms.

I don't care what "it" is called. Having the goals clearly stated would allow me to figure out if it's worthwhile for me to try to add value to this community and project. If someone has personal opinions on this subject, I welcome them too.

Nmh is clearly your project and not mine, besides being Free Software. I don't want to sail in your waters if you don't like it. I don't think it's necessary or useful to think in such terms as "your" and "mine". And I, for one, welcome your (and anyone

Nmh-workers mailing list
1. Link whole contents of an existing SFS file to a new SFS file.
2. Link a single item in an existing SFS file to an existing SFS file.
3. Link part of a foreign format (including binary) file to an item in an existing SFS file.

The program takes two files, the second of which ("destfile") must be an SFS file. In the first mode, "destfile" must not already exist. The primary file ("xxx_source") can be one of the following:

    SFS file
    binary file, single channel/multiple channels
    RIFF format file (.WAV), single/multiple channels
    VOC format file, single/multiple channels
    AU format file, single/multiple channels
    AIFF format file, single/multiple channels
    ILS format file, single channel
    HTK format file, single channel
    PCLX Tx data file, for access as SFS Tx item

Files on the host machine are specified with a filename. Files accessed through a file server running the UCL protocol are specified with a host name ("networkname") and a filename. For SP and LX items in SFS or binary files, a part of the primary data set may be linked, using specified start and end times (in seconds or samples). Binary data may be transformed by (i) selecting channels, (ii) swapping bytes, (iii) removing a DC offset and (iv) shifting the bit pattern up/down the word.

Options and their meanings are:

-I              Identify program and exit.
(sfs source)    Select a source item number. Each item is linked.
(binary source) Specify datatype. A single item is linked.
-f freq         (binary source) Specify sampling frequency in Hz.
-t filetype     Specify file format. Options are: RIFF, WAV, VOC, AU, AIFF, ILS, HTK, PCLX. Default: RAW (straight binary).
-c chan/#chan   Specify channel number and number of channels for multiplexed data. For example, two-channel data acquired and first channel required: "-c 1/2". Default is "-c 1/1".
-C chantype     Specify speech and/or Lx for stereo linkage. Use -C11 for stereo speech, -C12 for speech and Lx, -C21 for Lx and speech, -C22 for stereo Lx.
-s start        Specify start time for linked item (in seconds).
-S start        Specify start time for linked item (in samples).
-e end          Specify end time for linked item (in seconds).
-E end          Specify end time for linked item (in samples).
-b              (binary source) Swap bytes in sampled data.
-d DC           (binary source) Subtract the value given by "DC" from samples. Performed after byte swapping.
-m mult         (binary source) Shift the sampled bit pattern up (positive shift) or down (negative shift) by the number of bits specified in "mult". Performed after byte swapping and DC offset correction. Values of shift greater than 15 are reserved for manipulation of 8-bit values internally.
-h headerlen    (binary source) Skip 'headerlen' bytes before starting the link. Useful for skipping fixed-length headers prefixed onto raw binary, providing you know how long they are.
-r              Use a relative path to the source file. Default: absolute path.
blockchain.info says my tx is a double spend. Which tx uses the same utxo?

My transaction is marked as a double spend. I don't remember spending it twice. How can I see which transaction used the same utxo? https://blockchain.info/ja/tx/60821723b93e2ae5ed729e93c22ca824e7e91fe5a16cba3468139657dc953abc

You can find double spends by looking at the addresses that the inputs were previously associated with. At least one of the "send addresses" will show another transaction that spends one of the inputs that your transaction is also claiming. In this case, this is particularly easy, as there is only one input, and therefore only one previous transaction output to consider. When you look at blockchain.info's page for the address 12sWrxRY7E7Nhmuyjbz4TtGE9jRewGqEZD, you'll notice that there are two competing transactions trying to spend the input.

In regard to which of the transactions may be confirmed: obviously, only one of them can ever confirm. In this case, their fee rates are 7.5 and 8.95 satoshi/byte, so likely neither will confirm. In other cases, it depends on the transaction selection strategy of the miner. AFAIK, first-seen remains the standard policy, i.e. nodes and/or miners accept, relay, and confirm the transaction that was first seen and shun the other as a double spend. However, especially in a time of fee events and crazy mempool sizes, I'd expect more and more pools to adopt a replace-by-fee policy where they would prioritize the transaction with the higher fee. Not only do miners earn more that way, and the transaction will be up for confirmation quicker, but this also gives people a better chance to update urgent transactions that didn't have a sufficient fee the first time around. Since this is the obvious direction transaction selection will progress, let me use this opportunity to reiterate that unconfirmed transactions are a payment promise and not a reliable payment.
Accepting zero-confirmation transactions is a bet which becomes less reasonable by the block.

87f2bfc4a3f775d27497bb41c52acf5b4e264c303f8a34fb48e3bf5d3d9c6218
https://blockchain.info/tx/87f2bfc4a3f775d27497bb41c52acf5b4e264c303f8a34fb48e3bf5d3d9c6218

I've downvoted this answer because it doesn't explain and won't be useful to anyone else. Let's prioritize teaching people how to fish instead of providing fish.
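The "look for another transaction claiming the same input" check from the accepted answer can be sketched programmatically. This is a hedged illustration, not blockchain.info's actual detection logic; the transaction dictionaries and their field names are made up for the example:

```python
def find_double_spends(txs):
    """Return (first_txid, second_txid, outpoint) triples for every pair of
    transactions that claim the same previous output (prev_txid, vout)."""
    seen = {}       # outpoint -> txid of the first transaction claiming it
    conflicts = []
    for tx in txs:
        for outpoint in tx["inputs"]:
            if outpoint in seen and seen[outpoint] != tx["txid"]:
                conflicts.append((seen[outpoint], tx["txid"], outpoint))
            else:
                seen.setdefault(outpoint, tx["txid"])
    return conflicts

# Two transactions both claiming output 0 of transaction "p" conflict:
mempool = [
    {"txid": "a", "inputs": [("p", 0)]},
    {"txid": "b", "inputs": [("p", 0)]},   # double spend of ("p", 0)
    {"txid": "c", "inputs": [("q", 1)]},
]
print(find_double_spends(mempool))  # [('a', 'b', ('p', 0))]
```

Only one of the conflicting transactions can ever confirm; which one depends on miner policy (first-seen or replace-by-fee), as the answer explains.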
Tunis, November 2023. Participants of Open Source Hardware Building Workshops at CETTEX and at CETIBA learned how to build CNC machines and laser cutters, so that they are able to use, maintain, advance, modify and repair the machines on their own in the future.

The New Production Institute of the Helmut Schmidt University / University of the Federal Armed Forces is implementing an Open Laboratory for Digital Manufacturing to promote innovation and local value creation with digital fabrication tools towards an Industry 4.0 in Tunisia. This activity is part of the National Initiative "Towards an Industry 4.0 Tunisia" launched by the Ministry of Industry, Mines and Energy, supported by the special initiative "Decent Work for a Just Transition" – Invest for Jobs of the German Federal Ministry for Economic Cooperation and Development (BMZ) and implemented by the Digital Transformation Program of GIZ Tunisia in partnership with experts from Helmut Schmidt University.

The training workshops were implemented for consultants from industrial technical centers as part of the project to set up an "Open Laboratory for Digital Manufacturing" at technical centers in Tunisia. The open digital manufacturing lab (OpenLab Tunisia) will be a space for experiencing and experimenting with relevant digital manufacturing technologies in the hardware domain, bringing different stakeholders and partners together. Together with the project partners, the participants in the current workshops are therefore building open source machines themselves as part of the digital machinery of the future lab.

The workshops were organized by the project management of Helmut Schmidt University / University of the Federal Armed Forces Hamburg, Germany. The workshops were conducted by the open source hardware specialist InMachines Ingrassia GmbH in cooperation with Fab619, an open source hardware specialist in Tunisia. The project is funded by GIZ.
The specific goal of these workshops was to build a CNC machine and a laser cutter with participants from Tunisia's technical centers. These CNC machines can be used for manufacturing parts of medium complexity made of wood, plastic, or metal, such as furniture, metal gears or other components for machines, as well as for the creation of more artistic items.

The open source machines are cheaper than comparable commercial machines because their development is shared among a large community, which lowers development costs, and because maintenance costs are low: all the documentation, including blueprints and specifications, is available online for free. This also makes these types of machines extremely attractive, because having all the documentation at hand makes it possible to understand and adapt the machines to new tasks for a specific use case.

The project manager Dr. rer. nat. Juan M. Grados Luyando of the New Production Institute at the Helmut Schmidt University describes the importance of the current step as follows: "With these last workshops focusing on CNC machines and laser cutting, we are one step closer to complete the setup of the OpenLab at CETTEX as well as to fulfill our goal of providing the knowledge and machines to the different Technical Centers in Tunisia. We are excited to open the OpenLab space to the community of talented engineers in the Technical Centers and see how they come up with new creative ways of solving industry problems with these digital manufacturing machines."

Photos by GIZ Tunisia
Video: CNC Workshop by GIZ Tunisia
Read more about the project via: https://newproductioninstitute.de/digital4jobs
Alice Animation is an innovative 3D programming environment that makes it easy to create an animation for telling a story, playing an interactive game, or a video to share on the web. Alice is a freely available teaching tool designed to be a student’s first exposure to object-oriented programming. It allows students to learn fundamental programming concepts in the context of creating animated movies and simple video games. In Alice, 3-D objects (e.g., people, animals, and vehicles) populate a virtual world and students create a program to animate the objects. (www.alice.org) Scope of Curriculum You can view our Music Video curriculum here, or our Video Game curriculum here. YWiC first incorporated the Programming with Alice curriculum in the 2006 summer camp for high school students. This animation curriculum has been a camp staple and is taught the entire duration of the camp, excluding the first days set aside for Core Concepts. The course had previously been taught by professors and graduate students, but is now shifting into undergraduate hands. Concepts are taught to students daily, including (but not limited to) objects, conditionals, loops, events, and methods. For the first half of the lessons, instructors and students build the same project in order to learn a concept. During the second part of the lessons, students work on their own projects using the concept taught that day. Over the course of four weeks, students complete their own Alice music video project and video game project. Instruction usually lasts approximately 1-1.5 hours for 9-10 days, for a total of 9-15 hours of instruction. Teaching time will vary depending on class size and student’s prior knowledge of computational thinking concepts. Below are examples of two music videos created in Alice by members of YWiC. For more information about Alice, project ideas, or to download this software, visit the Alice website. 
Participants learn how to program in Storytelling Alice and create projects/stories based on what is learned. Campers meet in groups of 2 or 3 to promote communication and interactive skills. When teaching them how to create or modify methods and loops, the camp instructors also teach the basics of for loops and while loops, conditionals, and how to count and convert numbers in binary. At the end of the Storytelling Alice session, students present their final projects, explain what is used, and how to implement events, loops, etc. Scope of Curriculum Storytelling Alice was first included in YWiC’s summer camp for middle school girls in 2011. This curriculum has become an essential part of our camps, and is typically taught to students by undergraduate students over a period of 5-7 days. Teaching and student experimentation typically lasts from 1.5-2 hours per day, for a total of 7.5-14 hours – this time will vary based on class size and students’ prior knowledge of computational thinking concepts. A different concept is taught to students daily, including (but not limited to) objects, conditionals, loops, events, and methods. Instructors usually create and show a small Storytelling Alice project that demonstrates the concept that will be taught. For the first part of the lesson, instructors and students build the same project in order to learn a concept. For the second part of the lesson, students work on their own project using the concept taught that day. At the end of the teaching period, students will have completed two projects: the “group” project that everyone works on to learn the concepts, and their own personal project where they put what they’ve learned into action. To view the PowerPoint we use for instruction, click here! Also, you can view the step-by-step instructions we use to create a project with the campers. To view the students’ final project guidelines, click here! 
Below is an example project that a student might create using Storytelling Alice. For more information about Storytelling Alice, project ideas, or to download this software, visit the Storytelling Alice website. East Picacho Elementary In a collaboration with East Picacho Elementary school and Enrich the Kids, YWiC established an 8-week, after-school computing program that brought together 20 students in grades 3rd – 5th. YWiC taught basic computational concepts using Storytelling Alice, and included story-board outlines, teamwork, and presentation skill development. Check out the lesson plans we used here.
# Created by Daniel Thevessen

from collections import defaultdict

import gurobipy as gb
import pandas as pd


class RelevanceOptimizer:
    # Silence Gurobi's log output once, at class-definition time.
    gb.setParam('OutputFlag', 0)

    def _calculate_single_feature_relevance(self, columns, relevances, cost_matrix=None):
        # Keep class relevances separate: with a cost matrix, `relevances` maps
        # an index to a one-row DataFrame of per-class scores; without one it is
        # a single series of subset scores.
        if cost_matrix is not None:
            for index, df in relevances.items():
                df.index = [index]
            dataframe = pd.concat(relevances.values())
            classes = {col: dataframe[col] for col in dataframe.columns}
        else:
            classes = {'-1': relevances}

        single_relevances = defaultdict(int)
        for class_col, class_scores in classes.items():
            m = gb.Model('rar')
            n = len(columns)
            max_score = max(class_scores)

            # One continuous variable per feature, bounded by the highest score.
            solver_variables = {}
            for col in columns:
                solver_variables[col] = m.addVar(name='x_' + col, vtype=gb.GRB.CONTINUOUS,
                                                 lb=0, ub=max_score)
            vars_average = m.addVar(name='s', vtype=gb.GRB.CONTINUOUS)
            vars_sum = sum(solver_variables.values())
            m.addConstr(vars_average == (vars_sum / n))

            # Minimize the total relevance plus the squared spread around the mean.
            m.setObjective(vars_sum + self.__squared_dist(solver_variables.values(), vars_average),
                           gb.GRB.MINIMIZE)
            m.update()

            # Every scored feature subset must reach at least its observed score.
            for subset, score in class_scores.items():
                objective_sum = sum(solver_variables[col] for col in subset)
                m.addConstr(objective_sum >= score)
            m.optimize()

            # Normalize per class and weight by the class cost, if one is given.
            var_max = max(v.x for v in solver_variables.values())
            for k, v in solver_variables.items():
                single_relevances[k] += (v.x / var_max) * \
                    (1 if cost_matrix is None else cost_matrix[class_col][0])

        for k in single_relevances:
            single_relevances[k] /= (1 if cost_matrix is None else cost_matrix.iloc[0].sum())
        return single_relevances

    def __squared_dist(self, variables, mean):
        return sum((v - mean) * (v - mean) for v in variables)
// DynamicArray.cs (c) 2003 Kari Laitinen

using System ;

class DynamicArray
{
   int[] array_memory_space ;
   int number_of_integers_in_array = 0 ;
   int current_array_size = 0 ;

   void enlarge_this_array( int desired_array_size )
   {
      if ( desired_array_size > current_array_size )
      {
         int[] new_array_memory_space = new int[ desired_array_size ] ;

         for ( int integer_index = 0 ;
                   integer_index < number_of_integers_in_array ;
                   integer_index ++ )
         {
            new_array_memory_space[ integer_index ] =
               array_memory_space[ integer_index ] ;
         }

         array_memory_space = new_array_memory_space ;
         current_array_size = desired_array_size ;
      }
   }

   public void add_integer( int integer_to_the_end_of_array )
   {
      if ( current_array_size <= number_of_integers_in_array )
      {
         enlarge_this_array( number_of_integers_in_array + 4 ) ;
      }

      array_memory_space[ number_of_integers_in_array ] = integer_to_the_end_of_array ;
      number_of_integers_in_array ++ ;
   }

   public int Length
   {
      get
      {
         return number_of_integers_in_array ;
      }
      set
      {
         enlarge_this_array( value ) ;
         number_of_integers_in_array = value ;
      }
   }

   public int this[ int index_value ]
   {
      get
      {
         return array_memory_space[ index_value ] ;
      }
      set
      {
         if ( index_value >= number_of_integers_in_array )
         {
            Console.Write( "\n\n Too large index: " + index_value ) ;
         }
         else
         {
            array_memory_space[ index_value ] = value ;
         }
      }
   }
}

class DynamicArrayTester
{
   static void Main()
   {
      DynamicArray array_of_integers = new DynamicArray() ;

      for ( int integer_to_array = 990 ;
                integer_to_array < 1000 ;
                integer_to_array ++ )
      {
         array_of_integers.add_integer( integer_to_array ) ;
      }

      for ( int integer_index = 0 ;
                integer_index < array_of_integers.Length ;
                integer_index ++ )
      {
         Console.Write( " " + array_of_integers[ integer_index ] ) ;
      }

      array_of_integers[ 3 ] = 333 ;
      array_of_integers[ 13 ] = 888 ;  //  too large index -> error message
      array_of_integers.Length = 14 ;
      array_of_integers[ 13 ] = 888 ;  //  now index 13 is inside the array

      Console.Write( "\n\n" ) ;

      for ( int integer_index = 0 ;
                integer_index < array_of_integers.Length ;
                integer_index ++ )
      {
         Console.Write( " " + array_of_integers[ integer_index ] ) ;
      }
   }
}
In software development, debugging involves locating and correcting code errors in a computer program. Debugging is part of the software testing process and is an integral part of the entire software development lifecycle (SDLC). The debugging process starts as soon as code is written and continues in successive stages as code is combined with other units of programming to form a software product. The debugging process can be made easier by using strategies such as unit tests and code reviews, as well as debugging tools embedded in code editors such as the VS IDE.

The need to debug can occur at any level: at compile time or at run time. With the use of IntelliSense in the VS IDE, many compile-time issues can be caught and resolved immediately as you're coding. The more difficult issues usually occur during run time. There are two kinds of run-time errors, and each requires its own approach:

- Errors that cause the application to break.
- Logic errors, where the application does not stop but the results are not what is expected.

Debugging tools can be found in most software development applications. In VS, the Visual Studio Debugger is an invaluable resource. With it you can run your app in debug mode and do all of the following:

- Set breakpoints
- Navigate and step through code using step commands
- Step over code to skip functions
- Step into a property
- Run to a point in your code quickly using the mouse
- Run to a cursor (sets a temporary breakpoint on the current line)
- In some cases 'edit' code on the fly to continue debugging
- Inspect variables
- Examine the call stack

There are as many ways to approach debugging as there are developers. Early in my career I came to realize that debugging skills are vital to doing a great job, so I searched until I found what I thought was the most comprehensive way to debug any issues that might arise. I combined these ideas and wrote them down; then I found this diagram, created by Duncan Riach (Hackernoon.com).
By following some or all of these steps, the vast majority of bugs can be resolved relatively quickly. Bugs can occur at any point in the development stack: on the client side, within the business logic layer (BLL) or data access layer (DAL), on the server, on a client machine (configuration); it can even be a bug caused by how the user interacts with the application.

Debugging tools are a part of most software packages:

- Visual Studio Debugger
- Exception handling
- Logging (Log4Net, database logging, etc.)

Below is a 'simple' example of a run-time bug identified by the user of the application. The user is entering search criteria, but no results are being returned. The user knows that there should be search results. Useful tools here include:

- Developer tools (Chrome, Firefox, IE, etc.)
- SQL Server Profiler
- Debugger (breakpoints, step into and over code)
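A toy sketch of the second kind of run-time error described above, a logic bug where the app doesn't break but the results are wrong. Python is used here for brevity, and all names and data are invented for illustration; they are not taken from the article's actual application:

```python
# Hypothetical reconstruction of the reported symptom: the user's search
# returns no results even though matching records exist.

records = [
    {"name": "Alice Smith", "city": "Phoenix"},
    {"name": "Bob Jones", "city": "Denver"},
]

def search_buggy(rows, term):
    # Bug: the comparison is case-sensitive, so the user's lowercase
    # input "alice" never matches the stored "Alice Smith".
    return [r for r in rows if term in r["name"]]

def search_fixed(rows, term):
    # Fix, found by setting a breakpoint inside the filter and inspecting
    # the variables: normalize case on both sides before comparing.
    return [r for r in rows if term.lower() in r["name"].lower()]

print(search_buggy(records, "alice"))  # [] -- the symptom the user reported
print(search_fixed(records, "alice"))  # the record the user expected
```

Stepping through the buggy filter with the debugger and inspecting the compared values is exactly the kind of "inspect variables" work the list above describes.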
Should I use events in an XNA game?

I created a button class which draws a button on the screen. When I click on it, I want to see something happening. In WinForms, I would simply use the button's OnClick event. What about XNA? Should I add an OnClick event to my button class? Is that a good practice? Otherwise, how do you handle events in your game loop?

You're going to have to write your own OnClick event: something that takes the current mouse position and matches it against where your button is.

The only reason against using events in a game is that creating a delegate to attach to the event handler creates a heap object that can cause a garbage collection, which can cause a frame-rate hiccup on Xbox 360 (and possibly WP7, haven't tested it). In general, this should not be relevant to a game UI that you set up once and simply let run. Also, calling an event handler is a tiny, tiny, tiny bit slower than some other available methods. And this is also absolutely irrelevant for a UI. (It only comes into play for micro-optimised number-crunching.)

So, as long as you're not running around assigning event handlers all willy-nilly, the choice of using an event in a game is no different from the choice of using one in a regular application. Copying the design of WinForms for your game UI is perfectly fine. (It's worth pointing out one caveat of events: they are "hidden" strong references that can unintentionally keep objects alive if you don't remove the handler. This is relevant to both games and regular applications.)

There are three common approaches I saw in game engines. I'm not sure about XNA supporting all three, but here they are. First, you can inherit from your button class and override an OnClick function in your CustomButton class. The benefit is that you don't have to check if the button is pressed; it informs you whenever it's clicked. But it may lead to overuse of inheritance.
Second, you can define an OnClick function and pass it to an OnClickEvent on the button class (just like normal C# event handling). This has the same benefit as the previous approach, and you don't need to worry about inheritance overuse.

Third, you can check in your main loop whether the button is clicked or not. In this method it's not really an event, so the disadvantage is that you have to check yourself whenever you need to do something based on button inputs; but it has the advantage that you can control where the buttons are actually checked and take effect.

I stopped using .NET events and started to store all my events and changes in a buffer:

public sealed class GameStateData
{
    public bool IsEmpty = true;
    public List<EventData> EventDatas = new List<EventData>();
    public List<ChangeData> ChangeDatas = new List<ChangeData>();
    // ...your additional data

    public void Reset()
    {
        IsEmpty = true;
        EventDatas.Clear();   // List<T>.Count is read-only; Clear() empties the lists
        ChangeDatas.Clear();
    }
}

public struct ChangeData
{
    public Node Sender;
    public short PropertyIndex; // predefined property index, e.g. (short)MyIndexEnum.Life
    public object OldValue;
    public object NewValue;
    public object AdditionalData;
}

Pros:

- For multithreading support, the only thing you need is to swap buffers and process all events and changes in the render thread.
- Every game-step change is saved, so any game logic can be based on a combination of events. You can even implement time reversal.
- You don't need to create event fields that increase the size of a single object; game logic in most cases requires only a single, rare subscriber.
- There is almost no additional memory allocation (excluding boxing cases), which matters on WP7, Xbox and other micro-framework targets.
- You can have different event sets handling a predefined GameStateData per object.

Cons:

- More memory spent, although you need only 3 buffers (game, swap and render).
- Writing additional code at the start to implement the needed functionality.

BTW, sometimes I use both: the event model and the buffer model.
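The per-frame polling approach (the third option above) can be sketched in a few language-agnostic lines. Python is used here for brevity, and the Button class, its fields, and the click semantics (fire on release inside the button) are invented for illustration; this is not XNA API:

```python
# Minimal "check in the main loop" button: each frame, the game passes in the
# current mouse state, and update() returns True exactly once per click.

class Button:
    def __init__(self, x, y, width, height):
        self.x, self.y = x, y
        self.width, self.height = width, height
        self.was_pressed = False

    def contains(self, px, py):
        # Axis-aligned hit test against the current mouse position.
        return (self.x <= px < self.x + self.width
                and self.y <= py < self.y + self.height)

    def update(self, mouse_x, mouse_y, mouse_down):
        # A click fires on the release that follows a press inside the button.
        clicked = False
        if mouse_down and self.contains(mouse_x, mouse_y):
            self.was_pressed = True
        elif not mouse_down:
            if self.was_pressed and self.contains(mouse_x, mouse_y):
                clicked = True
            self.was_pressed = False
        return clicked

button = Button(10, 10, 100, 30)
print(button.update(50, 20, True))   # press inside: not a click yet
print(button.update(50, 20, False))  # release inside: click fires
```

The same hit test is what the accepted answer means by matching the mouse position against where your button is; whether you then fire a .NET event or just return a flag is the design choice discussed above.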
iSeries (AS/400) database file: password encryption

I am helping with a project in which an old software system on an iSeries is having a brand new .NET UI applied to it. It's going well... except...

In order to allow users to log in and maintain compatibility with the existing software installation, we need to figure out what encryption/hashing method the previous vendor was using, without access to their source code. I have a file with an ID and password column. The password column appears to contain only 16 characters per record, all binary. Part of the previous vendor's system was written in native green screen on the 400, and part of it was written in Microsoft ASP.NET. What type of encryption or hash would be:

- used by an AS/400 or iSeries green-screen app, and
- used by a Microsoft .NET app, and
- output a consistent 16 binary bytes, regardless of input length?

Pointers much appreciated. Thanks!

There are many built-in and third-party encryption schemes for the i. Your best bet is to find the API that the vendor uses in their applications, or ask them directly. A well-designed application would have that log-in code in one spot. Note: I have dealt with enough vendors to know that what I am saying is like asking you to move the Eiffel Tower 2 inches to the left.

First port of call is the system manual for the old system. After that, contact the supplier and, assuming you paid for support (you did pay for support, didn't you), get their technical support people to answer your question. If that doesn't get you anywhere, you have to start digging.

Sixteen bytes is 128 bits, so you probably have a 128-bit hash of something. Most likely MD5, especially if the original code dates from about 1991 to 1996. Next you need to decide if it is adding a salt to the password before hashing it. Create two new user accounts on the old system with different usernames and the same password, say "user1/password" and "user2/password".
Now look at the password file and locate the two new entries. If the two hashes are the same then no salt was used, and you probably have a simple hash of the password. If not, then try the MD5 hash of simple combinations of the username and password:

- user1password
- passworduser1
- user1:password
- password:user1
- etc.

If one of these works then you have solved it. If not, then you are going to spend a very long time building rainbow tables and all sorts of other cryptanalytic stuff. If it gets to that, it might be easier to just put a network sniffer onto your network where it hits the old system, so you can read your users' passwords before they get hashed. For additional certainty, check for the "You logged in correctly" message going back the other way before you record the password; they might mistype it just at the wrong time.
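The salt-hunting experiment above can be scripted directly. This is a hedged Python sketch: the actual 16 stored bytes would be read from the password column (not shown here), and the candidate combinations are the ones listed above:

```python
import hashlib

# Compute MD5 digests of the candidate username/password combinations and
# compare them against the 16 binary bytes stored in the password column.

def md5_digest(text):
    # MD5 always yields 16 bytes, matching the observed column width.
    return hashlib.md5(text.encode("ascii")).digest()

candidates = [
    "password",        # unsalted hash of the password alone
    "user1password",
    "passworduser1",
    "user1:password",
    "password:user1",
]

digests = {c: md5_digest(c) for c in candidates}

# stored_hash = <the 16 bytes read from the file for user1>
# for combo, digest in digests.items():
#     if digest == stored_hash:
#         print("match:", combo)

for combo, digest in digests.items():
    print(combo, digest.hex())
```

Note also that two accounts with the same password produce identical MD5 digests only if no per-user salt is mixed in, which is exactly the user1/user2 comparison the answer proposes.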
DML not allowed on Profile error message

I created a test using the following Apex code:

static testMethod void testProfile() {
    Test.startTest();
    Task task = [select id, OwnerId from task where OwnerId = '00104'];
    Profile p = [select id from profile where name='System Administrator'];
    User u = new User(alias = 'salesf'<EMAIL_ADDRESS>timezonesidkey='America/Los_Angeles',
        localesidkey='en_US', emailencodingkey='UTF-8', ProfileId = p.Id, Id = task.OwnerId,
        languagelocalekey='en_US', lastname='Testing', Firstname='Testing',
        CompanyName='xyz', username='iamasalesforce@noemail.com');
    insert u;
    Test.stopTest();
}

but I received the following error message:

System.QueryException: List has no rows for assignment to SObject

How can I execute the code in my test method without receiving that error message?

Profile cannot be inserted from code. It is read-only. Do something like this; it might help you:

public User createUser() {
    Profile p = [select id from profile where name='System Administrator'];
    //UserRole r = [SELECT Id FROM UserRole WHERE Name = 'CEO']; // use this if you need a Role
    User u = new User(alias = 'salesf'<EMAIL_ADDRESS> emailencodingkey='UTF-8',
        profileid = p.Id, lastname='Testing', Firstname='Testing',
        languagelocalekey='en_US', localesidkey='en_US', country='United States Of America',
        CompanyName='xyz', Phone='(123) 456-7890', title='Dev', PostalCode='S1A 0E5',
        timezonesidkey='America/Los_Angeles', username='iamasalesforce@noemail.com',
        city='Phoenix', State='Alabama');
    insert u;
    return u;
}

Thank you for your valuable workaround. I've added one more line,

Task task = [select id, OwnerId from task where OwnerId = '0050Y000001EwVyQAK'];

because I need my test to be executed for the Task object. But when I run the test I get: System.QueryException: List has no rows for assignment to SObject. Is there any workaround, and how can I avoid that message? Please explain what you changed.

@sfdev hey, it seems like your query is not returning any value.
That OwnerId = '0050Y000001EwVyQAK'... it seems like the owner id is wrong. Please check the owner id once. Better, run that SOQL query first in Workbench and see if it returns any result. If you don't know how to use Workbench, please google it. Also, hardcoding IDs is not a good practice, but I will tell you how to avoid it after you have run the same SOQL in Workbench. Get back to me once done.

Profile is read-only; you aren't able to do DML on the Profile object. This is because Profiles are described as metadata, and are stored as metadata in XML format.
DNA methylation at CpG dinucleotides is one of the most commonly studied parameters within the epigenome because it is directly assessable and is often reflective of the overall structure of chromatin, which, in turn, contributes to regulation of gene expression at the transcriptional level . While there is a myriad of techniques for analysis of DNA methylation, a number of those used in the past (e.g. reduced-representation bisulfite sequencing , methylated DNA immunoprecipitation sequencing ) have employed enrichment of regions with higher frequencies of CpG dinucleotides to limit the portion of the genome to be sequenced as a means to limit the cost and computational resources required to process and analyze the resulting data. However, these techniques provide only a partial view of the epigenome, typically focused primarily on the impact of DNA methylation on chromatin structure in promoters and exons where CpG dinucleotides are often most abundant . This limits the potential of these techniques to profile DNA methylation in other regions of the genome which also contribute to regulation of gene expression, such as enhancers or regions associated with the boundaries of topologically associated domains . Whole-genome approaches, such as whole-genome bisulfite sequencing (WGBS), yield informative results for the entire genome, and, as such, have become the gold standard for global analysis of DNA methylation with single-CpG resolution . Thus, as sequencing costs have decreased , an increasing number of investigators are opting to utilize this more comprehensive, genome-wide assessment of DNA methylation which yields large, robust datasets [8–10]. However, this more comprehensive assessment of the epigenome mandates a corresponding increase in the extent of computational analysis needed to interpret the resulting larger datasets. 
Recently, a novel snakemake workflow termed wg-blimp was described as an “end-to-end” pipeline for processing WGBS data by integrating established algorithms for alignment, quality control (QC), methylation calling, detection of differentially methylated regions (DMRs), and methylation segmentation for profiling of DNA methylation states at regulatory elements . The wg-blimp pipeline is simple to install on either a personal computer or in a research high computing cluster, often requiring only an input reference, gene annotation, and FASTQ read files to fully process WGBS data. However, due to the nature and large file sizes of WGBS sequencing data, implementing the wg-blimp pipeline in its current form often requires extended computing time emanating from the conversion of unmethylated cytosines to uracils in the original DNA strand following bisulfite treatment. During PCR amplification these uracils are replaced with thymine, ultimately resulting in the conversion of C-G base pairs into T-A base pairs. Because most cytosines in the genome exist in non-CpG contexts and are thus normally unmethylated, the bisulfite treatment causes a substantial increase in the proportion of T-A base pairs and a concomitant decrease in the proportion of G-C base pairs in the amplified copies of the initially treated DNA strands. This renders mapping of bisulfite-converted reads using a conventional read mapper inadequate, because a large percentage of the converted bases will be called as mismatches relative to the untreated reference sequence. To overcome this limitation, improved ‘3-letter’ aligners such as bwa-meth and gemBS , designed specifically for mapping bisulfite-converted reads, perform a two-stage mapping process. Cytosines on read 1 are fully converted to thymines while guanines on read 2 are fully converted to adenines. 
The reads are then aligned to either of two reference genomes where either all of the cytosines have been converted to thymines or all guanines have been converted to adenines. After mapping to the converted reference genomes, the read sequences are then restored to the original sequence, revealing methylated Cs which can be identified in further downstream processing. Due to this extensive processing required for conversion and alignment of all reads to multiple indexed genomes, followed by conversion back to the starting read sequence, the alignment step is very computationally time-consuming. While both bwa-meth and gemBS follow the same "3-letter" alignment mapping concept, there are significant differences in their implementation which translate to large differences in their overall speed due to differences in the underlying alignment software packages from which these specialized methylation aligners were generated. The bwa-meth methylated DNA aligner has a foundation built on the improved BWA-MEM alignment software which follows the seed-and-extend paradigm to find initial seed alignments with super-maximal exact matches (SMEMs) using an improvement of the Burrows-Wheeler transform algorithm [13, 15]. BWA-MEM additionally re-seeds SMEMs greater than the default of 28 bp to find the longest exact match in the middle of the seed that occurs at least once in the bisulfite-converted reference genome, to reduce potential mis-mapping due to missing seed alignments. BWA-MEM also filters out unneeded seeds by grouping closely located seeds into what it terms "chains," thereby filtering out, by default, notably shorter chains contained within longer chains (which are at least 50% and 38 bp shorter than the longer chain). The seeds remaining in these longer chains are then ranked by the length of the chain to which the seed belongs, and then by the length of the seed itself.
Seeds that are already contained in a previously identified alignment are dropped, while seeds that potentially lead to a new alignment are extended with a banded affine-gap-penalty dynamic programming algorithm. While these strategies have increased the potential size of the read that can be aligned using the BWA-MEM software to up to 100 bp, the aligner that the gemBS software is built on, GEM3, allows for mapping lengths of up to 1 kb, which can scale more quickly to large sequencing analyses while maintaining equal if not superior read mapping accuracy when compared to BWA-MEM. This superiority largely comes from gemBS performing the read conversion steps before and after mapping "on the fly" for each read pair, thereby avoiding the generation of intermediate files and greatly increasing the efficiency of the mapping process. In addition, GEM3 filters and sorts mapped seeds into groups referred to as "strata" which facilitate complete searches of indexed references to find all possible matches to the reference genome, improving both speed and accuracy over BWA-MEM and other heuristic mapping algorithms. Searching through such a large index file does expose one limitation of gemBS, which is that it requires 48 GB of RAM compared to only 8–16 GB required by bwa-meth. However, this limitation is normally insignificant given that most midrange or higher computers are equipped with more than sufficient RAM to meet this need. We sought to leverage these differences to improve the speed of read alignment in the wg-blimp pipeline. We were able to modify the wg-blimp pipeline by replacing the bwa-meth alignment software with gemBS. This single modification allowed us to increase the overall speed of the wg-blimp pipeline by > 7x and open up the pipeline to the alignment of longer reads, all without sacrificing alignment accuracy.
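The in-silico half of the "3-letter" conversion described above can be sketched in a few lines. This is a simplified illustration in Python (real aligners such as bwa-meth and gemBS also record the original bases so reads can be restored after mapping, and build two converted reference indexes):

```python
# Bisulfite "3-letter" read conversion before mapping: cytosines on read 1
# become thymines; guanines on read 2 become adenines. After mapping to the
# correspondingly converted reference, the original sequences are restored.

def convert_read1(seq):
    # C -> T conversion for read 1
    return seq.replace("C", "T")

def convert_read2(seq):
    # G -> A conversion for read 2
    return seq.replace("G", "A")

read1 = "ACGTCCGA"
read2 = "TTGGCAGC"

print(convert_read1(read1))  # "ATGTTTGA"
print(convert_read2(read2))  # "TTAACAAC"
```

Because most non-CpG cytosines are unmethylated and thus converted, aligning these reduced-alphabet reads against a matching reduced-alphabet reference avoids the flood of C/T mismatches a conventional mapper would report.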
I had to set up a new NAS system, because my old one wouldn't start after cleaning the fans.

In last week's review I already reported that my old NAS system didn't start after cleaning the fans. All options were tried, like a not cleanly seated RAM module, loose cable connections, or anything else like this. I couldn't get into the BIOS of the system, even though the hard disks were still working. But OK, the system lasted a little over 5 years.

The old system was set up in a LIAN LI Q08 case, which is very compact, not to say small. A new mainboard wasn't available at the local dealers, and I didn't want to wait 2-3 days for an online order. But a new mainboard meant a new CPU because of different CPU sockets, and a RAM upgrade from DDR2 to DDR3. After searching the web I decided to get the following hardware components:

- Asus B85M-G mainboard
- Intel Celeron G1820
- 4 GB RAM
- Corsair Carbide 500R midi-tower
- Kingston 120 GB V300

The mainboard (my first choice was not available) provides 4 SATA-600 and 2 SATA-300 ports to connect 6 hard disk drives. This was important, because my RAID5 consists of 3 HDDs. The CPU is sufficient for a non-graphical Linux operating system, and 4 GB of RAM is enough too, considering that other NAS systems sometimes offer only 1 GB of RAM.

Talking about the case, I originally decided on a Fractal Design tower, which offers tool-free installation of the hard disks and is designed very well. But too bad, this case wasn't available in my area, so I got the Corsair tower, which offers almost the same features and more than enough room for your hard disks. The cable management in this tower is very good, because all cables, including the cables for the hard disks, are routed to the back of the tower. But you should get SATA cables with angled plugs, so you can close the tower's back door. Assembling the hardware components was pretty easy, since the tower cables are labeled.
From the German site technikaffee.de I got the proposal to install OpenMediaVault on the NAS system, because of its web-based administration tools. But after 2 tries I gave up, because I ran into a lot of "connection errors" and sometimes the RAID wasn't visible in the administration tool. So I installed Debian again, the new version 8.2. With a netinstall image, a basic system setup on an SSD is done in a couple of minutes. Since the hard disks were working I didn't have to set up a new RAID5, even though this would not have been a problem with 2 [post id=213]backups[/post]. As you can see in the above screenshot, the Celeron CPU is fast enough.

In the past I didn't use services like web or SVN, but with the new system they should be available to the clients. My old router with an alternative firmware supported local DNS, but since I changed my [post id=782]internet provider[/post] I also got a new router. On the old system I installed bind9 as a local DNS service; on the new system I switched to the small dnsmasq, which is easier to set up. To use the local DNS service I had to provide the IP address of my NAS to the clients. But even after changing this in the router setup, the clients still end up with the router's IP address in their settings. Only manually setting the IP address on a Windows client makes the local DNS server available to the clients. This seems to be a severe bug in the router's firmware.

Now the server is running Samba for the Windows clients, email, SVN and the Apache webserver. Scripts for backing up the system and SVN were reused from the old system. The server isn't running all the time: the system shuts down if the main clients (my desktop PC and my laptop) are not active, and starting the system is done via WOL.

What do you think about a DIY NAS? I'm looking forward to your comments and questions.
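As an aside on the dnsmasq setup mentioned above: a minimal local-DNS configuration really can be just a handful of lines. This is a hedged sketch with an invented domain name, not the author's actual configuration:

```
# /etc/dnsmasq.conf -- minimal local-DNS sketch (domain name is an
# invented example)
domain-needed      # never forward plain hostnames upstream
bogus-priv         # never forward reverse lookups for private ranges
local=/home.lan/   # answer queries for the local domain ourselves
expand-hosts       # append the local domain to names from /etc/hosts
domain=home.lan
```

With `expand-hosts`, entries in the NAS's /etc/hosts are served to the clients under the local domain, which is the part that made dnsmasq simpler to set up than a full bind9 zone.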
I've been pleased with the results so far. The learning curve is steep, but the program rewards jumping in and thrashing about. Forum discussions are likely to contain basic questions answered by experienced programmers, and there are some excellent tutorials on YouTube and elsewhere for folks like me who learn best by observing. I have made exactly zero music with it so far. Part of this is that I have yet to figure out how to implement an idea I have that is especially well suited to Max; another part is that that project has fallen by the wayside as I have become entranced by another, more construction-oriented, project: the additive synth. Max's potential for additive synthesis is one of the big reasons why I decided to go with it: the size and complexity of what one can build is limited only by the processing power of one's computer. Plus, technically, additive synthesis is relatively simple, so it seemed like a good first project. Below is a screenshot of the latest iteration (#8). This version is based on adding sines in the overtone series. The instrument allows full control over the amplitude and phase of each overtone, along with exponential stretching of the overtone series (partials are raised to an exponent ranging from 1 to 1.125), allowing the 8th partial to be stretched, relative to the fundamental, by more than an octave. The fundamental follows a MIDI tempered scale. What I am enjoying about this project is the appreciation it is giving me of Fourier analysis: how different aspects of how a sound is constructed change its timbre. Although this has so far been primarily a learning project, I am now at the point where I can begin to make a more practical instrument out of it. So what this means for my compositional productivity is that I'm probably not going to be producing much in the near future.
However, as I gain skill with this new program, I expect it will be easier (and faster) to make music with it, so in the long run, I expect it to facilitate my output. In the medium term, I have this idea, and my work on the additive synth has helped me understand better how to implement it, so maybe soon. In the meantime, please lend your attention to some very interesting work by a musician I ran across last fall. All of his other work (that I've heard, anyway) is much closer to straight-ahead jazz, rock, even a little punkish, but this album is mostly about timbre -- and you know I'm all about that.

*PS: For those who might be interested, there is an open source program that does essentially the same thing as Max: it's called Pure Data. I spent a fair amount of time considering Pd, too, since, being open source, it's free. I went with Max because it has better support for non-programmers like me, but for those who are unintimidated by geekspeak and open source forums, Pd is a great program.
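For the curious, the stretched overtone series described above reduces to one small formula. This is my own numeric sketch in Python, not anything from the Max patch; the function names and the example fundamental are invented:

```python
import math

# Partial n of fundamental f0 is placed at f0 * n**s, where the stretch
# exponent s ranges from 1 (pure harmonic series) to 1.125 (maximum stretch).

def partial_frequency(f0, n, stretch=1.0):
    return f0 * (n ** stretch)

def sample(t, f0, amplitudes, stretch=1.0):
    # One output sample of the additive sum (per-partial phase omitted).
    return sum(a * math.sin(2 * math.pi * partial_frequency(f0, n, stretch) * t)
               for n, a in enumerate(amplitudes, start=1))

f0 = 110.0  # an arbitrary example fundamental (A2)
print(partial_frequency(f0, 8))         # 880.0 Hz, the unstretched 8th partial
print(partial_frequency(f0, 8, 1.125))  # ~1141 Hz with maximum stretch
```

Sweeping the stretch exponent while holding the amplitudes fixed is a direct, audible way to hear how inharmonicity changes timbre, which is the Fourier-analysis intuition described above.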
What advantages do "alt-coins" have over bitcoin? I'm struggling to see the importance of alt-coins. It seems to me that one of the major selling points of the various alt-coins is their quicker confirmation times. The common argument is: this is good for merchants and the like, because there is less time for them to worry that the customers' payments are legitimate. But can bitcoin users not have the same "luxury?" Why can't a merchant run his own node? His node will process the transaction, and the rest of the network can play catch-up. Another argument is that there are too few bitcoin, and that eventually bitcoin will be the gold of digital currency--what the big transactions happen in--and the alt-coins will be the silver, or the copper--for day-to-day use. Is that really even good in theory though? The price of bitcoin would need to be in the millions for that to even be a decent worry. I can possibly see it being an issue, but do we really need to consider it now, when the exchange rate for bitcoin hasn't seen more than a few months in the 4-digits? Am I overlooking a hyper-important advantage? closely related: Why do we need alternatives to Bitcoin? It depends a lot on the specific altcoin you're talking about. For the most part, there aren't a lot of major differences between Bitcoin and the alts. Sadly many of them exist only as the leftovers of pump-and-dump schemes intended to make their creators rich, though there are a few that are interesting in important ways. One of the primary benefits of a thriving altcoin economy is that they often represent the testing of a theory that may eventually benefit Bitcoin itself. Many altcoins, for example, use scrypt as their mining algorithm instead of SHA256. If SHA256 is discovered to be compromised some day, having a drop-in replacement for that subsystem available would be a major boon. 
You specifically mention faster confirmation times as a potential benefit, but the most often overlooked part of that statement is that transaction finality is a result of computational cost, not of time. This means that if an altcoin has one tenth the confirmation time and an equal amount of hashpower relative to Bitcoin, it will take ten times as many confirmations to reach the same level of transaction security. Faster confirmations are a psychological benefit, not a technical one. The "silver to Bitcoin's gold" theory, largely pushed by Litecoin proponents, is also fallacious in that the entire reason gold requires a baser partner metal is that it is not adequately divisible. Paying for a candy bar with gold would be incredibly difficult for both parties involved since the amount of gold needed would be absurdly small. Bitcoin is not subject to such physical limitations and as such can be divided into such tiny portions relatively easily. Taking all of the above into account, it should be clear that while a handful of noteworthy exceptions do exist, the majority of altcoins are intellectual curiosities at best and scams at worst. A few make meaningful changes, but thus far none of those changes has really been a meaningfully large improvement over the base system. And now I just need to wait for all the Litecoin and Dogecoin folks to come downvote me into oblivion. You and me both ;) Good point about the possible future issues of SHA256, and the possible replacement of a hashing algorithm first introduced by an alt-coin--I hadn't thought of that. You should look up Warren Buffett's reasons for not investing in Bitcoin: something better could come along and take over, and Bitcoin is still an infant. Personally, I'm convinced that something better is Nxt.
It's written from the ground up, it's proof of stake which is something like 1000 times more energy efficient and 500 times as cost efficient and more secure, with almost instant transaction confirmations coming soon. All of which is important. There are other reasons, such as dogecoin having long term fixed inflation making it better for the long run. And just in general, it's better to have a few alternative currencies, so that if say the Bitcoin miners decide to gang up, they can't take out the world's economy. Also, seems very unlikely and Bitcoin has stood the test of time, but let's say a bug was found in the coin itself.. again, it's good to have alternatives. "...such as dogecoin having long term fixed inflation making it better for the long run." A quick Google search would suggest that's an area of contingency. It is indeed, but the people who like it, will stick with it. Those who don't, will switch coins. Point is there is a coin out there for those who feel that is important. A lot of purists and developers won't like this analogy, but look at toothpaste (or any other product). People like different flavours or brands, small differences, no matter if they do basically the same thing. I think that's beginning to apply to crypto, with each coin appealing to different types of people/communities/niches (a lot of people like a number of coins). This doesn't really apply right now though as it's a casino and many aren't really established 'alternative products'. From an economic/investment/trading perspective, first thing that comes to mind: diversification. But do we need diversification? Isn't the point of bitcoin to be the go-to-non-governmental choice for exchanging value or services? Doesn't diversification in this sense work against the theory of bitcoin? I don't know if we need diversification, but I think a major point of BTC was to take away the power to create and control money from 'you know who'. 
The fact he made it open-source and easy to re-create makes me wonder if he cared about the price/'monopoly' of BTC alone. If everyone and their mother can create cryptocurrencies and agree to use them over fiat, well then it doesn't matter how many there are. Edit: In fact it's better, harder to control 10,000 with 100 made a day than just BTC alone. Point taken. Albeit a bit philosophical. Obviously I won't deny I'd be biased as I support alts also (and 'run' my own - albeit quite low in cap) and don't own many BTC, but just something I thought worth considering when looking at it from a different perspective. Counterparty is not an alt-coin. It is an extension of Bitcoin functionality. Assets can be bought and traded using BTC in Counterwallet. The Counterparty currency XCP, is not mineable, limited to 2.6 mil forever, and is only an intermediate for many features such as escrow. It's more like a market for Bitcoin than an altcoin, I wish people would take the time to understand this...
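The "equal work" point made earlier in the thread can be checked with simple arithmetic. This sketch uses illustrative numbers only (the real block times and hashpower ratios vary by coin):

```python
# Security comes from cumulative hashpower * time behind a transaction,
# not from the count of wall-clock confirmations.

btc_block_time = 10.0  # minutes per Bitcoin block
alt_block_time = 1.0   # an altcoin with one tenth the confirmation time

# With equal total hashpower, each alt block embodies one tenth the work,
# so matching the work behind N bitcoin confirmations takes:
btc_confirmations = 6
equivalent_alt_confirmations = btc_confirmations * (btc_block_time / alt_block_time)
print(equivalent_alt_confirmations)  # 60.0
```

Ten times the confirmations in one tenth the block time nets out to the same elapsed time and the same accumulated work, which is why faster confirmations are called a psychological benefit above.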
Refactor build/broadcast fragments This issue will track the discussion around the refactoring of the Build and Broadcast fragments. @sunidhi64 you're in charge! Would you mind describing a bit more what you have in mind for the refactoring? I have looked into the Material alert dialog; we can remove the Broadcast fragment and add a dialog fragment instead. The dialog fragment will ask the user to check whether the address, amount and fees inputted are correct. If they're correct, the user can proceed to broadcast the transaction. After the above feature is implemented, we can think of adding a variable fee rate? Yes agreed! I think the variable fee rate as a separate PR makes sense. The only thing I would say is that our problem with the Build/Broadcast fragments was that they were, indeed, two fragments. So one had to build the transaction (to check for the validity of the inputs), and then either toss it away and rebuild it in the next fragment (sort of what we do currently), or try to pass this pre-built transaction across the fragments. Passing it requires we serialize or parcelize it I think, and so that's another layer of complexity. I'd suggest we move to an AlertDialog that is within the same fragment as the Build fragment we currently have, so as not to have to parcelize and pass that transaction object across fragments. You can take a look at how we did it in Sobi Wallet (Material Alert Dialog that stays within the current fragment). Does that sound good? Or did you have another reason to try and build a new fragment maybe that I didn't think about? I read up on the Material Alert Dialog and found that it works well within the same fragment as long as the phone's configuration is not changed; if we rotate the device, the dialog will not survive. Otherwise we can have the AlertDialog in the Build Fragment. I looked at how we did it in Sobi Wallet. Another thing: what design should we implement for the dialog box?
Yes that's true, a change in config will destroy the data unless we specifically add it to the bundle so it can be retrieved in the case of a lifecycle change. But in this case I assume it's not too likely, particularly because I have disabled landscape mode (so users can't trigger a configuration change by switching from portrait to landscape). They could of course change apps while in the dialog, and the OS might decide to kill the app, which would also lose the data, but I don't think it's too big of a problem at the moment. Also, if the user changes apps or kills the app or something, we would like them to have to go through the process of verifying again (so the build and verification happen very closely together, to limit errors). For design... what do you propose? I have a basic dialog I've been using for the very first fragment (where it says "Be careful! You have to understand blablabla") and also for the "Hey, we notice it's your first time opening this wallet, would you like testnet coins blablabla" one. How do you feel about those? Good enough? I'm happy to see better designs too if you have other ideas. Yes, then we can proceed to add the AlertDialog within the Build Fragment. The design is good, but if I want to make the sub-headings darker, can we do that? Like "send to address" becomes bold. Yes, I'm not sure about how to build a more custom one. Take a look at some of the docs from Google or maybe youtube videos and let me know what you find! You can start a PR with a very simple layout if you prefer, and clean it up later. Up to you! Sure! I will look at the docs and start with the basics first. I have created a draft pull request, please check the layout and the contents of the dialog box. I have used SpannableString to make some changes to the string. This issue is done! Thanks to @sunidhi64. Next we'll be looking at making the AlertDialog into a Dialog for more flexibility in the UI.
Today’s topic is tricky, especially for people unfamiliar with the concept of indexes. Many of us use them implicitly when we create logical files, but do you really know what an index is and how to create one? In the previous TechTip, I explained how views are similar and, at the same time, different from logical files (LFs): views are easier to define and change, but there’s something that LFs can have that views can’t: keys. This brings us to the INDEX SQL instruction. If you’re not familiar with it, here’s what Wikipedia says about it: A database index is a data structure that improves the speed of data retrieval operations on a database table at the cost of additional writes and storage space to maintain the index data structure. Indexes are used to quickly locate data without having to search every row in a database table every time a database table is accessed. Indexes can be created using one or more columns of a database table, providing the basis for both rapid random lookups and efficient access of ordered records. An index is a copy of selected columns of data from a table, called a database key or simply key, that can be searched very efficiently that also includes a low-level disk block address or direct link to the complete row of data it was copied from. Some databases extend the power of indexing by letting developers create indexes on functions or expressions. For example, an index could be created on upper(last_name), which would only store the upper-case versions of the last_name field in the index. Another option sometimes supported is the use of partial indices, where index entries are created only for those records that satisfy some conditional expression. A further aspect of flexibility is to permit indexing on user-defined functions, as well as expressions formed from an assortment of built-in functions. In short, indexes are shortcuts to the data. 
But because an image is worth a thousand words, let me explain how an index works with two of them. Figure 1: Query over an un-indexed table Figure 1 shows what happens when a query is executed over an un-indexed table: Each row in the table is read, its column values are compared with the ones mentioned in the query, and the matching ones are selected. It’s slow, because the whole table will have to be scanned. Now let’s have a look at the same query and its behavior if there’s a usable index (that is, an index with the “right” key). Figure 2: Query over an indexed table Figure 2 depicts what happens if there is an index that can be used by the query. Instead of a full table scan, the query goes to the index first, which points only to the necessary rows on the table, thus dramatically decreasing the query execution time. Now that you’re up to speed on what an index is, let’s continue. Because your views can’t implement the ORDER BY clause, you need to create a view and an index to replace a keyed logical file, but an index is a more efficient access path than a logical file (LF). LFs can handle 8 Kb memory pages, while an index handles, by default, 64 Kb memory pages. However, you can specify the memory page’s size when you create the index; its range can vary between the LF’s 8 Kb and 512 Kb. This means that indexes can have a better performance than LFs by far. Note that indexes with larger logical page sizes are typically more efficient when scanned during query processing. Indexes with smaller logical page sizes are typically more efficient for simple index probes and individual key lookups. In practice, a larger memory page size represents a significant performance gain because more data is handled at a time, reducing the disk access frequency. 
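To make the two figures concrete, here is a small Python sketch (illustrative only; the toy table, column names, and helper functions are invented, not DB2 code) contrasting a full table scan with an index lookup:

```python
# Toy illustration of Figure 1 vs. Figure 2: full scan vs. index lookup.
# The "table" is a list of rows; the "index" maps a key value to row positions.

table = [
    {"warehouse_id": 1, "item_id": "A1", "qty": 10},
    {"warehouse_id": 2, "item_id": "B7", "qty": 3},
    {"warehouse_id": 1, "item_id": "C2", "qty": 25},
]

# Figure 1: un-indexed query -- every row is read and its column value compared.
def full_scan(rows, warehouse):
    return [r for r in rows if r["warehouse_id"] == warehouse]

# Building the index: a copy of the key column plus row positions,
# maintained at the cost of extra writes and storage (as the Wikipedia
# definition notes).
index = {}
for pos, row in enumerate(table):
    index.setdefault(row["warehouse_id"], []).append(pos)

# Figure 2: indexed query -- go to the index first, then fetch only
# the rows it points to, avoiding the full scan.
def indexed_lookup(rows, idx, warehouse):
    return [rows[pos] for pos in idx.get(warehouse, [])]

print(full_scan(table, 1) == indexed_lookup(table, index, 1))  # True
```

Both paths return the same rows; the difference is that the indexed path touches only the rows the index points to, which is where the query-time savings come from.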
The INDEX syntax is very simple: CREATE INDEX <schema or library name>.<index sql name> FOR SYSTEM NAME <index system name> ON <schema or library name>.<table name> (<key columns>) There are four types of indexes: - The “regular” index doesn’t require any additional keyword and creates an access path like the ones you know from the keyed-not-unique LFs. - The “unique” index prevents the table from containing two or more rows with the same value of the index key. When UNIQUE is used, all null values for a column are considered equal. For example, if the key is a single column that can contain null values, that column can contain only one null value. The constraint is enforced when rows of the table are updated or new rows are inserted. The constraint is also checked during the execution of the CREATE INDEX statement. If the table already contains rows with duplicate key values, the index is not created. - The “unique where not null” index is similar to the unique index but doesn’t consider all null values as equal. In other words, multiple rows containing a null value in a key column are allowed. - The “encoded vector index” is used by the database manager to improve the performance of queries. However, it cannot be used to ensure the ordering of rows. There’s a whole chapter on this topic in the Database Performance and Query Optimization manual. For instance, a “regular” index over the inventory master table defined with this table’s primary key would look like this: CREATE INDEX MYSCHEMA.IDX_INVENTORY_MASTER_MAIN FOR SYSTEM NAME I_INVMST01 ON MYSCHEMA.INVENTORY_MASTER (WAREHOUSE_ID ASC, SHELF_ID ASC, ITEM_ID ASC) The last line of the statement contains the key expression—the names of the columns that compose the key and the ASC reserved word, meaning that the data is sorted in ascending order. If I wanted any of the columns to be sorted in descending order, I’d replace ASC with DESC. Indexes can also define unique key constraints.
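The difference between the “unique” and “unique where not null” types above boils down to how NULLs compare. Here is a quick Python simulation of that rule (a sketch of the described behavior, not DB2 code; the helper names are made up, and None stands in for SQL NULL):

```python
# Simulate the null-handling difference between a UNIQUE index and a
# UNIQUE WHERE NOT NULL index.

def violates_unique(keys):
    # UNIQUE: all null values are considered equal, so a second None
    # counts as a duplicate key.
    return len(keys) != len(set(keys))

def violates_unique_where_not_null(keys):
    # UNIQUE WHERE NOT NULL: nulls are ignored by the constraint;
    # only non-null duplicates violate it.
    non_null = [k for k in keys if k is not None]
    return len(non_null) != len(set(non_null))

keys = ["A", "B", None, None]
print(violates_unique(keys))                 # True: two NULLs collide
print(violates_unique_where_not_null(keys))  # False: multiple NULLs allowed
```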
Here’s an example of a unique index, enforcing a unique key over the warehouse master table: CREATE UNIQUE INDEX MYSCHEMA.IDX_WAREHOUSE_MASTER_MAIN FOR SYSTEM NAME I_WHMST01 ON MYSCHEMA.WAREHOUSE_MASTER (WAREHOUSE_ID ASC) I’ll revisit these DDL topics in a later TechTip and discuss how you can convert your physical and logical files to their SQL counterparts. In the next one, I’ll continue the DDL discussion with an extremely useful but not-very-well-known instruction: ALIAS
We are happy to present the FinDock September '20 Release! Our release communication aims to inform you early of upcoming releases. Please keep in mind that the scope of a release is subject to change prior to the release date. Release notes are updated accordingly until the release date. - Release to Sandboxes: August 23, 2020 - Release to Production: September 06, 2020 FinDock now supports the new Enhanced Recurring Donations feature of NPSP. To make FinDock compatible with Enhanced Recurring Donations, several new fields were added to the mappings between FinDock & NPSP. For more information on Enhanced Recurring Donations, please visit the Salesforce Power of Us website. FinDock is now compatible with both Classic Recurring Donations and Enhanced Recurring Donations. No special settings are needed in FinDock; based on the configuration, FinDock determines which version is applicable. Issue: In some cases when a debit transaction was processed by Guided Matching, the payable Installment was changed to paid, but the NPSP Opportunity was not updated and remained in the Pledged state. Solution: Unlike Manual Review, Guided Matching did not create an NPSP Payment when processing debit transactions. This led to a status mismatch between the Installment and the Opportunity. We have now implemented the same processing logic in Guided Matching that is used in Manual Review to ensure the Opportunity Stage is changed to Posted (or the status that has been mapped) when the Installment status is changed to Paid. Further, we found that the status Paid was not listed as an option in the mapping between Installment Status and Opportunity Stage. We have now added Paid as a new mapping option in Status Mapping. Issue: If the Generate on Recurring Level option was enabled for the payment method reference, single Installments with that method did not have the Payment Reference. Solution: We fixed the code to ensure the Payment Reference will be generated for single Installments.
Issue: In some cases, when a bank file is uploaded to Chatter for FinDock to process, the file extraction in ProcessingHub is blocked by UNABLE_TO_LOCK_ROW errors, causing the extraction to fail. Solution: There are multiple reasons this error can occur. However, in each case, the error is temporary. To resolve this issue, we have implemented new processing logic that re-initiates file extraction after a few minutes when a row locking error occurs. Issue: Deleting mandates could fail in ProcessingHub if the mandate schedule close steps started before all deletions were completed. Solution: We modified the order and timing of processing steps to ensure the mandate deletions are processed first before moving on to generation and closing steps. Issue: When a mandate was manually deleted in Salesforce, the corresponding mandate record in ProcessingHub was not deleted. This could result in conflicts when, for example, parsing an Inbound Record led to an attempt to modify or delete a mandate that was already removed in Salesforce. Solution: The issue was caused by an error in the data sync between Salesforce and ProcessingHub. We have updated the syncing procedure to include the delete action on Mandates in Salesforce. Issue: A bug was found in ProcessingHub data processing of Gift Aid claims. This bug appeared when generating Gift Aid claim files for batches with more than 500 records. It affected claim batches from January '20 onward and resulted in incorrect claims. Solution: The root cause was in a query on the ProcessingHub side, which we have fixed in this release. All impacted customers were identified and contacted directly. We have generated updated claims files and are working with customers on the adjustments. We sincerely thank everyone involved for their cooperation. Issue: Axerve provides support for redirecting customers to a custom checkout page of your own design instead of using Axerve's default checkout page.
Solution: We have added a new setting to the Axerve for FinDock extension: Custom Checkout URL. If you have your own checkout page, you can enter the URL here to redirect your customers to your page. Issue: To reconcile Bollettino Postale files, FinDock uses the 18-character Quarto Campo payment reference. In certain files, Poste.it sends the reference without the last 2 check-digits, resulting in a 16-character reference which FinDock does not accept. Solution: To be able to match files with 16-character references to the 18-character references stored in Salesforce, we made a new Guided Matching rule that adds the 2 check-digits to the 16-character reference before the matching process starts. This ensures that the 16+2 digit payment reference from the file and the 18-digit payment reference stored in Salesforce are in the same format and can be matched. In Italy, banks may use an alternative pain.002 standard for status reports called CBI pain.002. To further support our Italian customers, we have added the XSD validation required by CBI to our bank file processing. This validation is in addition to our already existing SEPA pain.002 validation. When a CBI pain.002 file is uploaded to Chatter, FinDock runs the specific validation and rejects the file (with a validation error message) if it does not conform to the CBI standards. FinDock now handles transaction types, families, transactions & categories as per CODA standard 2.6, section “3. CODING OF THE TRANSACTIONS”. Practically, this means that: - From transaction families (Type 2, 3) we create Transaction records in FinDock with Entry Type = ‘Entry’ - From categories (Type 6, 7, 8) we create Transaction records in FinDock with Entry Type = ‘Entry Detail’. (Type 9, detail of 7, is not supported) This way you can choose whether you want to handle these transactions on the family = totalisation level, or on the detail level.
Taking into account the differences in CODA files across banks, this functionality has for now only been enabled for one specific file and bank. If you run into similar file structures, please do not hesitate to contact us! Issue: Generated Stripe API keys can be up to 255 characters. However, FinDock would only allow for 80 characters. This would prevent you from configuring the Stripe for FinDock Payment Extension. Solution: FinDock now allows for up to 255 characters in the Stripe API key field. This is the maximum possible number of characters. Some Swiss banks require a customer / organisation prefix in front of ESR LSV references. To add a prefix to your ESR LSV references: - Go to Custom Settings. - Press Reference Settings. - Press Manage. - Press the top New button to create an organization level value. - Enter your organization's prefix into the ESR Reference Prefix field. - Press Save. The provided prefix will now be prepended to all ESR LSV references upon creating Installments and when using the Payment Request Generator with the 'standard' ESR generation. FinDock only supports one prefix at this point! When a Tikkie is paid, FinDock receives a notification from Tikkie. For Tikkie amounts below 1 Euro, the extraction of the amount was not done correctly and failed. To decrease the probability of this occurring, we lowered the limit to 10 cents. Issue: A transaction Id is included in the return message from Worldpay for recurring payments, but this Id was not added to the payment record in Salesforce. Solution: The transaction Id is now added to the payment record when the installment is collected.
To improve consistency and aid in reconciliation of Worldpay transactions, FinDock will now always use the InstallmentId as the Worldpay order code and Payment Transaction Id on: - One-time payments (Installment records) created through the API - First payments (Installment records) used to authorize Recurring Credit Card payments - Installments created with Payment Schedules to collect Recurring payments The InstallmentId set as the Worldpay order code is unique and can be controlled from FinDock, making it more convenient for matching than the payment reference, which was only available to FinDock for one-time payments (including first payments) through the API flow; the payment reference is no longer used or stored by FinDock. Issue: Notifications from Worldpay to FinDock were arriving multiple times due to a missing confirmation message from FinDock. Solution: We have implemented the expected confirmation response so notifications only arrive once.
Can I use SpriteBatch when drawing sprites on a rotating 3D plane I'd like to have my plane of sprites rotate similarly as in this game (YouTube Video). So basically, everything is drawn in 2D, but the 2D plane is then "rotated in 3D" (or camera is rotated). When rotated, sprites further away should look like they are further away. Question: Can I use SpriteBatch with this? IIRC SpriteBatch uses Orthographic view matrix, so probably not? If I can, how? If not, how would you suggest I draw the sprites? PS. I am really bad at explaining problems so feel free to ask any questions. I fixed up your question to mention that you're rotating the 2D plane of sprites - as per the video, rather than rotating individual sprites. Yes! Let's have a look at how SpriteBatch works: SpriteBatch draws sprites onto a 2D plane. So if you had wanted to rotate your sprites so that they come "off" this 2D plane, then you'd probably have to find another solution. But, looking at your video, it looks like you want to rotate the entire 2D plane. SpriteBatch's normal mode of operation uses a built-in orthographic projection matrix into client space. It also lets you pass in an optional global transformation matrix. You could use this global transformation to get your effect, but you'd also have to "pre-undo" the projection matrix that comes afterwards, and the maths is annoying. But fear not - SpriteBatch has another mode of operation, where it can use any Effect, using that effect's vertex shader and transformation matrices. BasicEffect will do nicely here. Take a look at this article to get the basic principles. Simply set BasicEffect.Projection matrix to a suitable perspective projection. And then set BasicEffect.World matrix so that your plane of sprites appears in the correct location (and the right way up!). Take a look at this article as well. It describes the process in more detail. 
Just to add a note for those needing to write a custom vertex shader for SpriteBatch - on the second linked article, there's a comment by Remi Gillig which might be helpful too. But in this case you won't need a custom vertex shader, since a regular BasicEffect instance with the correct parameters will do. Can I use SpriteBatch when drawing sprites with 3D rotation? I'm afraid not. You can however render the sprites you want to rotate together to a RenderTarget2D. Then skin a quad with the previously mentioned RenderTarget. This describes the "skin and draw a quad" part. Edit: watched the video on my phone :). Anyway, to achieve a similar effect you just need 1 quad and one render target. Draw your whole scene to a render target and apply it to a single quad. You can then either move your camera or the quad, whichever you find more convenient. "and then draw it using an orthogonal camera." I thought I had to use a normal (perspective) camera? I want sprites further away to actually look like they are further away, like in that video. I thought that isn't possible with an orthogonal camera :| My bad, I can't view youtube at work :/ and was assuming that you were going for a super paper mario effect (2d camera w/ 3d sprite transformations). But anyway, you can use whatever type of camera you like. "Technically", no. SpriteBatch exists for 2D sprites. However, you can supply transformation matrices to the SpriteBatch, and unless I'm mistaken this allows you to skew and stretch your sprite to your heart's content. You would, of course, have to manually calculate everything, but you could fake 3D by clever transformation of 2D. Can you represent skew using an XNA matrix? I don't think you can. http://www.senocular.com/flash/tutorials/transformmatrix/ My bad, I meant taper, and you can't. "Things they cannot do include tapering or distorting with perspective" So using the transformation matrix isn't going to work. Excellent reference though. I appreciate you sharing it.
@ClassicThunder Actually, you can. That reference Superbest linked describes 2D affine transformation matrices. XNA's matrices are 4x4 matrices and should be able to perform tapering operations. (You should be able to cancel out the built-in ortho matrix completely - see my answer. But I'll leave the maths for someone else ;)
There should be something like that for Crystal, too! The technical part should not be too complicated, the playground already has most of the functionality for interactive coding and it could easily be integrated into an HTML site with the tutorial texts and example code. A static site would probably be sufficient for that. Creating a great tour guide will be much more challenging (as everything with documentation... 😄 ). The Gitbook offers a good starting point, and the Go tour could be a guide, as well as similar resources (I've found https://www.learnrubyonline.org/ so far, which also exists for some other languages). http://tryruby.org/ is another great one that I have shown to multiple people to get them started with Ruby (and even programming in general). I think it's a great idea and I'd say go for it! Count on us to link it from crystal-lang.org once it begins shaping up, setting up a subdomain for it, etc. I just stumbled upon https://github.com/crystal-lang/crystal-presents which has the technical foundation already covered! And looks pretty nice, too, @bcardiff A very good "tour" for me was the ruby koans; these could probably be transferred to crystal relatively easily, although I do not know how much they depend on rake specifics (nothing prevents crystal from using ruby's rake though) @GloverDonovan - For a beginning outline, the talk that @asterite gave (maybe a year or two ago) would be a good starting point. I have lost the link to the video, but I'm sure someone has it or maybe @asterite can provide the slides or code used for the presentation. @marksiemers Thanks for the reply. I plan to have one main idea covered in each section, such as a section on modules and another on classes, each with their own individual subsections / exercises. A link to the video would be great; I'd appreciate some help on an outline. I want to try test-driven learning (i.e.
run a spec file every time someone submits their code) so that end-users can see exactly how their code behaves when given a few test cases. One roadblock for me though is that the crystal playground isn't sandboxed by default (i.e. you can access system files, etc.). I'm not sure how I'd fix this for a production server. Did some testing and it looks like the Crystal play website uses playpen to sandbox things. How reliable has this been so far, and is there any reason to prefer playpen over firejail? Here is the video: https://vimeo.com/191351066 I want to try test-driven learning (i.e. run a spec file every time someone submits their code) so that end-users can see exactly how their code behaves when given a few test cases. we have carc.in The source for that is at https://github.com/jhass/carc.in and it uses playpen, as @GloverDonovan already mentioned. I'm sure we can find a way to either run the Playground in a sandboxed environment as well - or just simply use carc.in as a backend for a publicly hosted playground. In fact, the technical implementation is really not important right now. The main issue is putting together great content. For now, we can just use the playground as is and only run it locally. I agree with @straight-shoota, getting the content together should be the highest priority - even if that means only running locally (similar to running go's tour locally). In the future, I like some of the features that a local Crystal play server provides over carc.in: - Showing the variable values and types for each line of code - Having a modal window for loops, so you can see what happened for each iteration I also like the workbook functionality, but I think that is not as applicable to an online version unless crystal wants to build their own repl.it The content should largely be written long-form tutorials. If there is a walkthrough interactive thingy it should come after the "book-style" thing.
Long-form text is one thing everyone can agree on, I know that I would skip any interactive tutorials I found and search for an introductory text. Others are different, but the basics please first. I've done a lot of programming tutorials online, but just want to say I really enjoyed Haskell's. The words they use make the experience ten times better and increase the developer's confidence (which inherently increases the developer's learning ability). Examples: First Time's a Charm Well done, you typed it perfect! Hi there, chris! That's a pretty name. Honest. You're getting the hang of this! Great, you made a list of numbers! If you win we'll split the winnings, right? I'm not saying we need to have this exact level of happiness during an interactive tour, but I just want to re-iterate that words of encouragement can really make a difference. I would love to work on this! Great, feel free to check out forum.crystal-lang.org/t/interactive-tutorial/3019 and crystal-lang/crystal-book#484
You can create Operations to be run based on any data that changes within your app. For example, if a new event is created by a user, like a "New Order", or if any data within your app is updated, like a payment balance, an order status, etc. In this article, we create an API Operation for one of our default learning apps - DemoApp - which you have in your account. This Operation will update one of the object's data fields using the Filter-Trigger-Operation sequence. You can also reference the Triggers article, if needed. Go to the API Operations (GREEN MARK) section on the Top Menu of the Mobsted platform. Click Add Operation (RED MARK). Click Mobstedv8 (RED MARK) to see the list of available API methods. Choose 1.6 Updating Object data (GREEN MARK). Name your new operation (ORANGE MARK). Choose an available key in the API Key drop-down (GREEN MARK). Fill in all required fields (RED MARK ↓). In this example we have only one required field; other API methods can have other fields. As a value in the field you can use static data (a number, string, etc.) or Hashtags as a dynamic data source (in the Hashtags article you can learn what the #application:id# reference does). Click Add Field = Value pair and fill in the fields (RED MARK ↓), click Save. Fill in the new field Address (it is the name of the object's column in DemoApp) with any info you want to be saved into that column by the Operation. Click Save Operation (RED MARK ↓). Close the Operations window. The operation is ready to use in a trigger or in the mobile app as an Action. Configure Filter and Trigger Now, we need to set up a filter and a trigger to add the Operation we made above, which will be automatically run by our app. Click Menu (RED MARK) and select Objects (GREEN MARK). Click Add filter (GREEN MARK ↓). Click Create New Filter (RED MARK ↓). Configure your new filter.
Choose ActionName in the events section (RED MARK ↓) and equals + Water Delivery Order in the value section (RED MARK). Click Save Changes in the Filters tab (RED MARK) and go to the Triggers tab (GREEN MARK). Click Create new trigger. Configure your new trigger: - Name your trigger (RED MARK ↓) - choose Events in trigger scope (GREEN MARK) - choose Automatic in trigger mode (ORANGE MARK) - choose Instantly in On new data, appearing (BLUE MARK) - click Start automatic mode (Red button) Choose the UpdateAddress operation in the drop-down and click Add Operation. Close the filter's window. Done, now let us check how it works. Checking the result You are in the Objects section. Open the app for any object (RED MARK). Make sure that all cells in the column Address are EMPTY (GREEN MARK). To open the app in your desktop browser (GREEN MARK) or on your mobile - scan the QR code (RED MARK). Go to the Water delivery page of the DemoApp and make an order. In a few seconds the trigger will check the new event with ActionName = Water Delivery Order and execute the chosen API operation (UpdateAddress). This ActionName is assigned to a button in the DemoApp. If you refer to point 11 above, we set up the Filter to look for it. Go to the Objects in DemoApp and find the updated cell in your objects' column.
vSphere Web Client slot size Once you install vSphere Web Client, point the browser on your Web Client server to the vSphere Web Client Administration Tool, located at https://localhost:<9443 or whatever port you chose>/admin-app/. Next, register the vCenter Server or Servers with the vSphere … Starting with VMware vCenter 5.0 there is a web client that lets you manage your virtual infrastructure. Now with vCenter 5.1 that client is so enhanced that you can do all the stuff you would in the vSphere client, and even more. For example, if you want to deploy VMware vSphere Data Protection, you can't use i VMware introduced the vSphere Web Client as an administrative interface for vSphere with version 5.1. Here are some tips for those who are still learning how to use the new interface. Note: vSphere Web Client SDK is deprecated, and vSphere 6.7 is the final generation of releases for the vSphere Web Client SDK and Programming Guide. Certification for plug-ins written with version 6.7 of this SDK will continue to be valid. Develop your plug-in with the vSphere Client SDK 6.7 and its associated Programming Guide. 3/26/2012 1/11/2017 When you select the Host Failures Cluster Tolerates admission control policy, view the Advanced Runtime Info pane that appears in the vSphere HA section of the cluster's Monitor tab in the vSphere Web Client. This pane displays information about the cluster, including the number of slots available to power on additional virtual machines. Security best practices dictate that you keep these disabled - hence, the warning messages. Administering or managing your ESXi host from the command line should be done using the vSphere Management Assistant appliance (vMA) or by installing PowerShell. Given the reservations of the other VMs in the cluster, the slot size for memory is 8192 MB and the slot size for CPU is 4096 MHz in the cluster. If no VM-level reservation is configured, default slot sizes are used.
Jun 02, 2015 Solution: Just checked my vSphere Web Client. The host shows which slots are populated. Never noticed it before, but it is there.

If an earlier version of the vSphere Web Client is installed, this procedure upgrades the vSphere Web Client. Note: vCenter Server 5.5 supports connection between vCenter Server and vCenter Server components by IP address only if the IP address is IPv4-compliant.

There is a short VMware Tech Pubs video (30 Sep 2013, 5 min) explaining what a slot is, how it is calculated, and how it affects vSphere HA.

Recently somebody asked me a question about VMware vCenter running on a Windows Server. The Windows Server was running VMware vCenter 6.5 and, in case of a datacenter-related problem, they wanted to get access to the vSphere Web Client (Flash) on the system locally. It sounds easy, right?

In vSphere 6.0 (yes, vSphere 6.0 is required), the vCenter Single Sign-On login page is now written using regular HTML and CSS. This means you can actually now customize the login page with your own logos, colors, or text that you wish to display to your end users.

The vSphere Web Client provides extension points to let you add Flex elements to the existing Getting Started, Summary, Monitor, Manage, and Related Objects tabs for each type of vSphere object, such as a host, virtual machine, or cluster. These data view extensions are displayed as sub-tabs or sub-tab views in the object workspace hierarchy.

In vSphere 6.0 and later, the vSphere Web Client is installed as part of the vCenter Server on Windows or the vCenter Server Appliance deployment.
This way, the vSphere Web Client always points to the same vCenter Single Sign-On instance.

VMware HA slots are the default admission control option prior to vSphere 6.5. Slot size is defined as the memory and CPU resources that satisfy the reservation requirements for any powered-on virtual machine in the HA cluster. This article just covers how the HA slots are calculated, step by step, nothing more.

The vSphere HTML Client SDK Fling provides libraries, sample plug-ins, documentation and various SDK tools to help you develop and build user interface extensions which are compatible with both the vSphere Client (HTML5) and the vSphere Web Client. The vSphere Client (HTML5) released in vSphere 6.5 has a subset of the features of the vSphere Web Client (Flash/Flex). Until the vSphere Client achieves feature parity, we might continue to enhance and/or add new features to the vSphere Web Client. In vSphere 6.5, we have made significant improvements to enhance the user experience of the vSphere Web Client.

I have an issue with the order in which the vCenter servers are displayed in the vSphere Web Client on a freshly installed 6.5 environment. Here is how I set it up: I have 2 sites, siteA and siteB. First I deployed a PSC in siteA and created an SSO domain. Then I deployed a vCenter (appliance) in siteA and linked it to the SSO domain.

Sep 08, 2020: My vCenter appliance running 6.0 tripped and fell over Sunday night. Called VMware for support and spent 4 hours working through all its well-known issues, such as running out of disk space on / and the other partitions where they put logs.
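The slot-size arithmetic described above can be sketched in a short script. This is a simplified model for illustration only: real vSphere HA also accounts for VM memory overhead and honors the das.slotCpuInMHz / das.slotMemInMB advanced options, which are ignored here.

```python
# Simplified model of vSphere HA slot-size admission control.
# Slot size = largest CPU reservation and largest memory reservation
# among powered-on VMs; each host contributes floor(capacity / slot) slots.

def slot_size(vms, default_cpu_mhz=32, default_mem_mb=0):
    """vms: list of (cpu_reservation_mhz, mem_reservation_mb) per powered-on VM."""
    cpu = max([v[0] for v in vms if v[0] > 0] or [default_cpu_mhz])
    mem = max([v[1] for v in vms if v[1] > 0] or [default_mem_mb])
    return cpu, mem

def total_slots(hosts, slot):
    """hosts: list of (cpu_capacity_mhz, mem_capacity_mb) per host."""
    cpu_slot, mem_slot = slot
    slots = 0
    for host_cpu, host_mem in hosts:
        # A host's slot count is limited by its scarcer resource.
        per_host = host_cpu // cpu_slot
        if mem_slot:
            per_host = min(per_host, host_mem // mem_slot)
        slots += per_host
    return slots

vms = [(4096, 8192), (1000, 2048), (0, 0)]   # reservations, as in the example above
hosts = [(16000, 65536), (16000, 65536)]     # per-host capacity
slot = slot_size(vms)
print(slot, total_slots(hosts, slot))        # (4096, 8192) 6
```

With the 8192 MB / 4096 MHz reservations from the example, each 16 GHz / 64 GB host is CPU-bound at 3 slots, giving 6 slots for the cluster.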
Is the statement "UTC is not a time zone" in reality wrong? Technically and strictly speaking, the statement is not wrong. UTC is a standard, not a timezone (as you already linked). A timezone corresponds to some region in the world and has lots of different rules regarding that region:
- What's the UTC offset (the difference from UTC) when it's in Daylight Saving Time and when it's not
- When DST starts and ends
- All the changes in offsets and DST this region had during its history
Example: in 1985, the Brazilian state of Acre had standard offset UTC-05:00 (and UTC-04:00 during DST), then in 1988 it was on UTC-05:00 without DST, then in 2008 the standard changed to UTC-04:00 (and no DST), and since 2013 it's back to UTC-05:00 and no DST. While the timezone keeps track of all of these changes, UTC has no such rules.
You can think of UTC in many different ways:
- a "base" date/time that everybody else is relative to; this difference from UTC is called "offset". Today, São Paulo is in UTC-03:00 (the offset is minus 3 hours, or 3 hours behind UTC) while Tokyo is in UTC+09:00 (an offset of plus 9 hours, or 9 hours ahead of UTC).
- a "special" timezone that never varies. It's always at the same offset (zero), it never changes, and never has DST shifts. As the "offset of UTC" (not sure how technically accurate this term is) is always zero, it's common to write it as UTC+00:00 or just Z.
Another difference between UTC and a timezone is that a timezone is defined by governments and laws and can change anytime/anywhere. All the changes in Acre described above were defined by politicians, for whatever reasons they thought valid at that time. (So, even if a region today follows UTC in its timezone, there's no guarantee that it'll stay the same in the future, and that's why even those regions have their own timezones, even if they look redundant.) But no matter how many times politicians change their regions' offsets, they must be relative to UTC (until a new standard comes up, of course).
Now, when you see implementations like TimeZone.getTimeZone("UTC"), you can think of it in 2 different ways:
- a design flaw, because it's mixing 2 different concepts and leading people to think they're the same thing
- a shortcut/simplification/nice-trick/workaround that makes things easier (as @JonSkeet explained in his answer).
For me, it's a mix of both (fifty/fifty).
The new java.time API, though, separates the concepts into 2 classes: ZoneRegion and ZoneOffset (actually both are subclasses of ZoneId; ZoneRegion is not public, so in practice we use ZoneId):
- if you create a ZoneId with an IANA timezone name (always in the format Europe/Berlin), it will create a ZoneRegion object: a "real" timezone with all the DST rules and offsets during its history. So, you can have different offsets depending on the dates you're working with in this zone.
- if you create a ZoneId with an offset name (like Z, +03:00 and so on), it will return a ZoneOffset: an object that represents just an offset, the difference from UTC, but without any DST rules. No matter what dates you use this object with, it'll always have the same difference from UTC.
Having 2 classes (ZoneOffset and ZoneRegion) is consistent with the idea that offsets in some region (in some timezone) change over time (due to DST rules, politicians changing things because whatever, etc.), while ZoneOffset represents the idea of a difference from UTC that has no DST rules and never changes. And there's the special constant ZoneOffset.UTC, which represents a zero difference from UTC (which is UTC itself). Note that the new API took a different approach: instead of saying that everything is a timezone and UTC is a special kind, it says that UTC is a ZoneOffset which has a value of zero for the offset. You can still think it's a "wrong" design decision or a simplification that makes things easier (or a mix of both).
IMO, this decision was a great improvement compared to the old java.util.TimeZone, because it makes clear that UTC is not a timezone (in the sense that it has no DST rules and never changes); it's just a zero difference from the UTC standard (a very technical way of saying "it's UTC"). And it also separates the concepts of timezone and offset (which are not the same thing, although very related to each other). I see the fact that it defines UTC as a special offset as an "implementation detail". Creating another class just to handle UTC would be redundant and confusing, and keeping it as a ZoneOffset was a good decision that simplified things and didn't mess up the API (for me, a fair tradeoff). I believe that many other systems decided to take similar approaches (which explains why Windows has UTC in its timezone list).
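The same region-vs-offset distinction exists outside Java; Python's standard library makes it easy to see concretely (zoneinfo.ZoneInfo plays the role of ZoneRegion, datetime.timezone the role of ZoneOffset; this assumes IANA tz data is available, as it is on typical Linux systems):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

berlin = ZoneInfo("Europe/Berlin")    # a "region": carries all the DST rules
fixed = timezone(timedelta(hours=1))  # a bare offset: no rules at all

winter = datetime(2021, 1, 1, 12, 0, tzinfo=berlin)
summer = datetime(2021, 7, 1, 12, 0, tzinfo=berlin)

# The region's offset depends on the date (CET in winter, CEST in summer)...
print(winter.utcoffset())  # 1:00:00
print(summer.utcoffset())  # 2:00:00

# ...while a fixed offset, like UTC itself, never changes.
print(datetime(2021, 7, 1, tzinfo=fixed).utcoffset())  # 1:00:00
print(timezone.utc.utcoffset(None))                    # 0:00:00
```

As in java.time, UTC here is simply the fixed offset of zero (timezone.utc), not a region with rules.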
Is it possible, and what happens, when you clear the pool table? In Duke Nukem Forever, right near the start of the game before you get in the lift, there is a pool or snooker table. I have been trying for ages now to clear the table by sinking all the balls, and am getting fed up, but the closest I can get is one ball away before potting the white and the game restarting. I am guessing it is possible, but does anyone know what happens?

Since it doesn't award an achievement, it's likely to be an Ego boost, if anything. Can't say for sure, because I haven't done it yet...

Right, after what must have been around an hour of trying, I finally managed to do it! Duke says "Too easy", the twins say "You're the best, Duke", and Lunboks was correct: you get +2 Ego. This early on in the game, I am not sure how valuable 2 Ego is, but I can tell you this was the most boring waste of time! I have heard the "s***" word probably close to 100 times and whilst funny... it gets so boring! To anyone else trying, a little hint: if you think the white ball is about to go in, try hitting it again, as you do not need to wait for it to stop; if it is moving too fast, instead of going in, it just bounces out again.

There are spoiler tags, but this only answers your question, so they're hardly necessary.

@lunboks ahh, ok, no problem! Just didn't know the policy here and didn't want to annoy anyone!

All you get is an ego boost for completing it, but let me answer your question on how to do it.
I've completed this game multiple times every single time I spend at least an hour or two an hour and a half trying to play it by eventually giving up when I get back to it what I do is you first start out hitting the ball in it knocks them all into a line it depends on where you're standing how far away you are from it and where your crosshair is I've had multiple shots where I was able to hit it and knock it in or bounce off by the broken physics it all depends on how or where you're standing I've been down to one ball knocked the one ball in but also Walk to Walk knocked in the white bowl so there is a way to do it you just have to have patience necessarily I don't at the moment but if you need any more help I'm more than happy to help you I might just do a video explaining the physics and how you're supposed to play this it's all based on aiming like where your acrosser is how far away like in real pool Hi dumpsterman, can you please add punctuation?
GitHub Copilot is an AI-powered code assistant developed by GitHub in collaboration with OpenAI. It uses machine learning algorithms to suggest code snippets or entire lines of code to developers as they write code in various programming languages. Copilot is designed to help developers write code more efficiently and accurately, by providing suggestions based on the context of the code being written, as well as by analyzing existing codebases to identify common patterns and best practices. Copilot is integrated directly into popular code editors like VS Code, allowing developers to quickly incorporate suggested code snippets into their projects. Increased productivity and efficiency - GitHub Copilot can save developers time and effort by generating code snippets and suggestions in real time. This allows developers to focus on higher-level tasks and spend less time on repetitive or mundane coding tasks. Reduced time spent on repetitive tasks - By automating certain coding tasks, this tool can significantly reduce the amount of time developers spend on repetitive tasks, such as writing boilerplate code. Improved code quality and consistency - GitHub Copilot generates code suggestions based on established patterns and best practices, which can help ensure that the resulting code is high-quality and consistent. Better code completion - GitHub Copilot can help with code completion, suggesting common methods, functions, and variables to improve the accuracy and completeness of the code being written. Access to a vast knowledge base - GitHub Copilot has been trained on a massive dataset of publicly available code, which means that it has access to a vast knowledge base of code snippets, functions, and best practices and more! The GitHub Copilot subscription is available on a monthly or yearly cycle. 
If you have an active GitHub Copilot subscription, and are then assigned a seat as part of a GitHub Copilot for Business subscription in GitHub Enterprise Cloud, your personal GitHub Copilot subscription will be automatically canceled. You will receive a prorated refund for any remaining portion of your personal subscription's current billing cycle. You will then be able to continue using GitHub Copilot according to the policies set at the enterprise or organization level. A free subscription for GitHub Copilot is available to verified students, teachers, and maintainers of popular open-source repositories on GitHub.
1. Sign up at Z2U.com (or log in if you already have an account)
2. Browse through the GitHub Copilot Account page and look for an offer that suits you
3. Once you select an offer, the seller will be notified so he/she can deliver the account to you (Note: depending on whether the seller chose automatic or manual delivery, you'll either receive the account instantly or you'll need to work out the details with the seller)
4. As soon as you receive the account, check it to see if everything is as described in the offer
5. Make the confirmation to our system so the seller will get paid.
Abstract: A Service-Oriented Business Application (SOBA) is a new way to build enterprise applications using SOA concepts. In this article, we provide a practical, no-hype introduction to this emerging trend and discuss why large enterprises are adopting this approach. The article also contains some lessons learned while developing a large-scale financial enterprise SOBA.

Why large enterprises are adopting SOBA
More and more businesses across the world are implementing IT systems that span organizations within and outside the enterprise. These systems face a common challenge: to find a solution that is easily evolvable, scalable, extensible, cost-effective, and fits well with their existing legacy systems. Traditional software architectures are ineffective in meeting these demands, especially for systems that span organizational boundaries. SOA addresses these challenges by creating a paradigm in which reusable services are designed, developed and deployed independently and assembled together through standards-based communication protocols.

A financial SOBA application on SOA
Tavant has built a large-scale, mission-critical financial (Loan Origination) application for a leading mortgage company in the USA. The mortgage company had a legacy system to originate loans for its customers that was unable to cope with the rate of change required by the business, and unable to satisfy the increasing volume of transactions and the need for 24x7 uptime. Tavant engineered a cutting-edge Loan Origination solution using SOA that solved all these challenges. The application is designed as a set of independent services that coordinate with each other to implement business processes. The new architecture solved most of the existing issues. As each service is independent, we can easily upgrade a service as business requirements change without disturbing the whole application.
The release and deployment life cycles of the services are independent of each other, so overall maintenance time is reduced for the whole application. We can scale each service separately depending on its load, so we can optimize the hardware usage of the whole application.

Learning from real experience
Here are a few lessons from real experience to follow when building a SOBA on a service-oriented architecture:
1. The best way to build a large SOA system is to extend a small SOA system that works.
2. The business should drive the process of identifying the services. Make sure a service always serves a larger business goal, even though it is developed and designed independently.
3. While identifying a service, make sure that it is defined at the right level of granularity; it should not be too fine-grained, as this will result in a tightly coupled, brittle infrastructure.
4. Consider carefully whether web services, RMI or asynchronous communication make sense for your application; do not follow the hype.
5. For use cases where a transaction needs to be propagated from one service to another, creating a long transaction, it is better to use asynchronous communication to shorten the transaction.
6. If asynchronous message communication is used, make sure that each service is capable of handling out-of-order messages, poison messages (badly formatted messages) and duplicate messages.
7. Each service should be built in such a way that its release and deployment can be independent; it should not depend on the others, or it will lose the flavor of SOA.
8. Using a standardized message format for exchanging data between services is a better approach to integration, for example the OAGIS Business Object Document (BOD), which is based on XML Schemas.
9. As services are independent in nature, a service's underlying data should be kept separately, without depending on other services.
It is therefore better to design the database so that each service has its own schema.
10. When services are carved out of a large-scale application, make sure that each service identifies a record of another service through a virtual foreign key (a logical foreign key, since the database cannot maintain a foreign key between two schemas).
11. Try to reuse legacy logic before replacing it.
12. While building an SOA application, always consider the External Service Providers (ESPs) and the legacy applications that are connected to the different services.
13. In an event-driven SOA, back-end services (the components of a service where messages are processed) are driven by events. Thus, the designer and developer must be prepared to handle unreliable services, ensure data integrity, resubmit requests, and manage transactions.

Conclusion
This article includes some lessons from experience that can be of good help to developers who are new to SOBA.
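The point above about asynchronous messaging (handling duplicate, out-of-order and poison messages) can be sketched in a few lines. The message shape and class below are hypothetical, invented purely to illustrate the three checks a consumer needs:

```python
import json

class Consumer:
    """Illustrative consumer: rejects poison (badly formatted) messages,
    drops duplicates by message id, and buffers out-of-order messages
    until their turn in the sequence comes."""

    def __init__(self):
        self.seen_ids = set()   # dedup store (a database table in real life)
        self.next_seq = 1       # next expected sequence number
        self.pending = {}       # out-of-order messages, keyed by sequence
        self.processed = []

    def handle(self, raw):
        try:
            msg = json.loads(raw)
            msg_id, seq, body = msg["id"], msg["seq"], msg["body"]
        except (ValueError, KeyError, TypeError):
            return "poison"        # route to a dead-letter queue
        if msg_id in self.seen_ids:
            return "duplicate"     # already handled: acknowledge and drop
        self.seen_ids.add(msg_id)
        self.pending[seq] = body
        # Drain every message that is now in order.
        while self.next_seq in self.pending:
            self.processed.append(self.pending.pop(self.next_seq))
            self.next_seq += 1
        return "accepted"

c = Consumer()
c.handle('{"id": "a", "seq": 2, "body": "ship"}')   # arrives early: buffered
c.handle('not json')                                 # -> "poison"
c.handle('{"id": "b", "seq": 1, "body": "order"}')  # unblocks seq 2
c.handle('{"id": "b", "seq": 1, "body": "order"}')  # -> "duplicate"
print(c.processed)  # ['order', 'ship']
```

In a real service the dedup set and pending buffer would be durable, so the guarantees survive a restart.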
I am working on a Rhino Python script, intended to prepare a file for CNC cutting. I know there is dedicated software to do this, but I would like to keep it within Rhino, just using scripting, no plugins. To start, I have a set of surfaces, from which I extract the edge curves and offset those by a fixed distance. That is actually how far I have succeeded, including getting the offset in the correct direction. Now, I need to create gaps in the curves I have generated, to hold the parts within the base material sheet. I want this to be initiated by the user, with a single click on the relevant spot on the curve. I have come as far as getting the curve ID and the point coordinates, but I have run out of ideas on how to gap the curve. All CurveBoolean operations seem to work on closed curves only, but after creating one gap it will no longer be closed, so I have stopped investigating that route. SplitCurve and TrimCurve seem to require curve parameters, but I cannot think of a way to generate those from a user-defined point. Does anybody have any ideas, hopefully with a concrete reference to the rhinoscriptsyntax to be used?

1. Divide the offset curve into the number of points where you want to have 'bridges'.
2. Create circles using those points as centers; the diameter will be approximately the width of the bridge.
3. Intersect the circles with the offset curve and collect all the parameters of the intersections on the offset curve. You will need something like rs.CurveCurveIntersection() for this. If you have never worked with the result of an intersection before in scripting, it's a bit complex (it's a nested list) but the help is pretty explanatory.
4. Split the offset curve at all the parameters found.
5. Now, get all the split segments and see if any of the original circle centers lies on them. If so, delete the segment; otherwise keep it. The leftover segments will be your offset curve with gaps.
It sounds complex, and actually it is a bit, but it does work.
There are some cheap hacks though, such as using rs.Command("Split") and then checking the results for curve length and deleting the ones that are near the circle diameter (this assumes the other segments are always larger).

Thanks Mitch. I know it sounds cheap, but I had actually thought about that route right after I wrote the above question; I just did not see a way to take your step 5. But I have found it now: IsPointOnCurve should do it (unsurprisingly, how could I have overlooked that...).

Just reporting: it worked well along the lines that Mitch suggested, albeit slightly differently. The parameters of the curve intersection points are created in ascending order, leading to using the first parameter in the list to make the first split in the original curve, and after that reassigning the second (in the list) split-off part of the curve (which is the higher-parameter part of the original curve, and now has a new ID) as the target of the split with the second intersection parameter. By the same token, the first (in the list) part of the second split result sits inside the circle, so you can delete that to create the gap. No need to search for the segment with the original circle center on it. Avoid placing your cutting circle too close to a curve end or start point; the above procedure will not work reliably anymore (and neither will the procedure where you delete the bit with the circle center on it). Even writing about it confuses me; it took me a while to get my head around it. Not so much the intersect function; as Mitch said, it is pretty well explained in the help. But it works fine now.

I agree with you in general, but in this case the result is immediately visible to the user, and I have tested it on multiple curves in the development process, so I will leave it for now. More testing ahead anyway.
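The split-and-discard logic described above can be prototyped outside Rhino in plain parameter space. This sketch uses no rhinoscriptsyntax at all (a real script would get the parameters from rs.CurveCurveIntersection() and split with rs.SplitCurve()); it just shows which parameter intervals of the curve survive once each bridge removes its little window:

```python
def gapped_intervals(domain, bridge_params, half_width):
    """Given a curve's parameter domain (t0, t1), the parameters of the
    bridge centers, and the bridge half-width in parameter space, return
    the parameter intervals to KEEP, i.e. the curve with a gap at each
    bridge. Assumes bridges lie inside the domain and away from the ends
    (as noted above, bridges near the seam are unreliable anyway)."""
    t0, t1 = domain
    # Each circle/bridge removes [c - hw, c + hw] from the curve.
    cuts = sorted((c - half_width, c + half_width) for c in bridge_params)
    kept, cursor = [], t0
    for lo, hi in cuts:
        if lo > cursor:
            kept.append((cursor, lo))   # segment before this gap survives
        cursor = max(cursor, hi)
    if cursor < t1:
        kept.append((cursor, t1))       # tail segment after the last gap
    return kept

# A curve parameterised over [0, 10] with bridges centred at t=2 and t=7,
# each gap 0.5 wide in parameter space:
print(gapped_intervals((0.0, 10.0), [2.0, 7.0], 0.25))
# [(0.0, 1.75), (2.25, 6.75), (7.25, 10.0)]
```

Working in parameter space like this is also a cheap way to unit-test the gap logic before wiring it to real curve IDs inside Rhino.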
I was wondering why your description is more comprehensive than mine, I found out it’s because you are using the Rhino 6 syntax list, and I use the Rhino 5 equivalent (as I am not upgrading to R.6). But the R.5 list does mention splitting by multiple parameters, although it does not say how…(and I am wondering what the curly brace is doing in the syntax?)
It's certainly going to be an interesting one to watch. Up until recently I believe there was a large amount of ignorance regarding browser options within the "normal surfer" community. With the mainstream attention in the media, and now newsletters from non-IT sources (like Tedster's doctor contact), there will be more attention put on the likes of Mozilla and Opera. The question is: will people read and change browser, or just read and continue as they are? <afterthought> If we begin to see a significant shift away from the IE browser towards Mozilla (and let's not forget Opera) then this may well have an impact on the way we develop sites. Many people tend to concentrate on IE (which as a business decision is not without fault: cater for the masses), but if Mozilla and Opera start to come to the fore, new techniques that have been avoided may now get more airing... </afterthought> Well, it sure is happening, and with a bit of a rush, and it has thrown my processes into disarray. I was pleased a few years ago when Netscape died (pretty much); it basically halved my workload. I have now switched to Firefox after spending several days cleansing my toxified system of IE incursions, and realised once again that I have to do double the work to publish any site. Not happy, really. I hated MS owning the browser marketplace to some extent, but damn, it simplified the process for developers. I even had deliberately poor coding on my own site so that if anyone browsed it from Netscape (4.xx) it looked like crap and they wouldn't ask me to work on their site. I know some of you will despise me for that, and I'm sorry, but NS4 was an awful thing and, like game developers explaining why it makes no economic sense to develop ports of their games for Mac/*nix environments, I can only offer the same explanation. But now, here I am spending hours poring through forums trying to find out how to minimise the painful differences in Moz5/IE6 page rendering. And can I charge my clients any more?
Not easily. This may sound stupid, but it's the way I develop: work to standards first (which pretty much guarantees Gecko/Opera/Safari) and then debug for IE6. At first glance it looks backwards, as IE6 is your biggest client, but I find it a lot quicker.

Robin_reala, that's exactly what I do, and others here as well. It is a much faster development cycle. I used to be fearful of standards mode and always wrote to quirks mode. But after I became fluent with standards (not as bad a learning curve as I thought) and then had a few of the main IE issues understood, I found I could generate templates and pages the fastest I ever had.

When I see a healthcare newsletter giving their clients advice to switch away from IE, however, it's not about the author's development cycle. He really cares about people and he sees the security issues as important to his clients' overall well-being.
How do I set or change the version information of an existing EXE or DLL?

Background: every Windows executable binary file (EXE, DLL, etc.) has a version resource. A file version number is a 64-bit number; binary files such as .exe or .dll files carry one, while text files do not have version information. Note the distinction between the two versions a .NET assembly carries: applications use the assembly version to determine which DLL to bind to ("is this the right dll?"), while the file version is not used by the runtime. A common recommendation is that the file version change with each build, leaving the assembly version for manual, planned changes.

Changing version information without recompiling:
- Resource editors (for example Resource Editor, or the Simple Version Resource Tool for Windows) let you open the version info of a compiled EXE or DLL for editing: edit, update or delete version information such as the version number, and change or remove version key strings. This also works on Windows Vista EXE and DLL files.
- From the command line, electron/rcedit can change any supported property of an EXE or DLL, e.g. rcedit "path-to-exe-or-dll" --set-file-version.

Changing version information at build time (.NET):
- The version number can be changed in the AssemblyInfo.vb (or AssemblyInfo.cs) file; the DLL produced will have that version. Build plugins and MSBuild configuration can update the assembly and file version of all matching files automatically, for example with an auto-incrementing scheme.
- If you need to change the logic of a DLL itself, you must modify the source and recompile; a DLL is compiled binary code, so renaming it to .txt or opening it in Notepad will not produce anything editable.

Reading version information:
- In .NET, the System.Diagnostics.FileVersionInfo class retrieves the version information of a file; VB.NET exposes the same class, which simplifies obtaining file version information.
- Checking a DLL version in PowerShell is extremely easy; the real code fits in a single line.

Related notes:
- To specify a .NET Framework version for an application pool (IIS 7), you can edit the configuration files directly.
- If you specify "Specific Version" = false on your assembly references, the build binds to whatever version it finds; an ASP.NET site on IIS 7.5 may not pick up a new version of a third-party DLL until the Temporary ASP.NET Files cache is refreshed.
- ASP.NET 4 breaking changes: in previous versions of ASP.NET, configuration pointed to the older System.Web.dll; ASP.NET 4 binds automatically to the version 4 System.Web.dll.
- A ConnectionString in a dll.config file is plain XML (?xml version="1.0"), so it can be edited in a text editor without recompiling the DLL.
- What is a DLL? DLL is the abbreviation for Dynamic Link Library; DLL files are needed by programs or web browser extensions. Private DLLs use version-specific information, or an empty .local file, to enforce which version an application loads (the classic mitigation for DLL Hell).
- The FileDate Changer utility doesn't require any installation process, as long as you have a sufficiently new version of comctl32.dll.
OPCFW_CODE
At Code Club it’s our mission to get as many young people interested in coding as possible. Here Dan Powell, Programme Manager at Code Club, talks about his experience with inspiring girls (including his daughters) to get coding. When I was 12, my parents bought me a Sinclair ZX81, and ever since then I’ve been interested in computers, moving on to an Acorn Electron and getting a B in my Computer Studies O Level. I didn’t pursue a career as a programmer, and got an arts degree instead. Yet after a few twists and turns, I became a digital sound artist, and so I find myself writing Python now and again. Doing that, alongside my work at Code Club, means that coding still plays a part in my day-to-day life. Why am I telling you this in a blog about getting girls into coding? Because I have two daughters, and I want them to feel the same way I do about the possibilities that coding offers: even if you’re not a professional coder, it’s still incredibly useful to know how to write some code. However, I’m very aware that despite everyone’s best efforts, including here at Code Club, there still aren’t enough girls going on to a career in programming. So what can I do about that? How can I help keep my girls engaged, and do my bit to redress the gender balance in the industry? As well as working at the Raspberry Pi Foundation, I also volunteer most weeks at the Code Club at my daughter’s primary school. The club was started by Wendy Armstrong, a freelance Full Stack .Net & Mobile Developer who shares my passion for getting girls into coding. Female role models There are a couple of things that we have done at my Code Club to try and engage girls. First of all, having an amazing female developer like Wendy is incredibly powerful. She is a huge inspiration to the girls who come to our club, proving to them that coding is not just for boys. One thing I know from working for Code Club is that girls often drop out when other clubs come along. 
That doesn’t really happen at our club, and I’m certain that it is because we have a great role model for them. So if you’re a female programmer, and even if you don’t code for a living, please think about volunteering at a local Code Club. The second thing we do is to reserve half of the spaces in our club for girls, and when Wendy does an assembly at the start of the school year, she actively encourages girls to apply. This means our gender balance is 50/50, which is where we want it to be. We think there are fewer opportunities for girls to get into coding, and we believe that prioritising girls at our Code Club goes some way towards addressing this. One piece of evidence I can offer is my eldest daughter — she loves Code Club, and she even runs drop-in Python workshops at the Raspberry Jam I help organise. I asked her just now if she’d be as into coding if it wasn’t for Code Club and having Wendy as a role model. She said she wouldn’t be, and that’s all the proof I need.
Inspire girls to code by starting your own Code Club
Do you feel passionate about getting more young people coding? By starting your own Code Club, you can make a real impact in your community. Find out more on our website www.codeclub.org.uk.
OPCFW_CODE
use petgraph::graph::NodeIndex;
use rustwlc::WlcView;

use super::super::LayoutTree;
use super::super::core::Direction;
use super::super::core::container::{Container, ContainerType, Handle, Layout};

impl LayoutTree {
    pub fn move_focus_recurse(&mut self, node_ix: NodeIndex, direction: Direction)
                              -> Result<NodeIndex, ()> {
        match self.tree[node_ix].get_type() {
            ContainerType::View | ContainerType::Container => { /* continue */ },
            _ => return Err(())
        }
        let parent_ix = self.tree.parent_of(node_ix)
            .expect("Active ix had no parent");
        match self.tree[parent_ix] {
            Container::Container { layout, .. } => {
                match (layout, direction) {
                    (Layout::Horizontal, Direction::Left) |
                    (Layout::Horizontal, Direction::Right) |
                    (Layout::Vertical, Direction::Up) |
                    (Layout::Vertical, Direction::Down) => {
                        let siblings = self.tree.children_of(parent_ix);
                        let cur_index = siblings.iter().position(|node| {
                            *node == node_ix
                        }).expect("Could not find self in parent");
                        let maybe_new_index = match direction {
                            Direction::Right | Direction::Down => {
                                cur_index.checked_add(1)
                            }
                            Direction::Left | Direction::Up => {
                                cur_index.checked_sub(1)
                            }
                        };
                        if maybe_new_index.is_some() &&
                           maybe_new_index.unwrap() < siblings.len() {
                            // There is a sibling to move to.
                            let new_index = maybe_new_index.unwrap();
                            let new_active_ix = siblings[new_index];
                            match self.tree[new_active_ix].get_type() {
                                ContainerType::Container => {
                                    // Get the first view we can find in the container
                                    let first_view = self.tree
                                        .descendant_of_type(new_active_ix, ContainerType::View)
                                        .expect("Could not find view in ancestor sibling container");
                                    trace!("Moving to different view {:?} in container {:?}",
                                           self.tree[first_view], self.tree[new_active_ix]);
                                    return Ok(first_view);
                                },
                                ContainerType::View => {
                                    trace!("Moving to other view {:?}",
                                           self.tree[new_active_ix]);
                                    return Ok(new_active_ix)
                                },
                                _ => unreachable!()
                            };
                        }
                    },
                    _ => { /* We are moving out of siblings, recurse */ }
                }
            }
            Container::Workspace { .. } => {
                return Err(());
            }
            _ => unreachable!()
        }
        let parent_ix = self.tree.parent_of(node_ix)
            .expect("Node had no parent");
        return self.move_focus_recurse(parent_ix, direction);
    }

    /// Updates the current active container to be the next container or view
    /// to focus on after the previous view/container was moved/removed.
    ///
    /// An attempt is made to set a new view, starting with the children of the
    /// parent node. If a view cannot be found there, it starts climbing the
    /// tree until either a view is found or the workspace is reached (in which
    /// case the active container is set to the root container of the workspace).
    pub fn focus_on_next_container(&mut self, mut parent_ix: NodeIndex) {
        while self.tree.node_type(parent_ix)
                  .expect("Node not part of the tree") != ContainerType::Workspace {
            if let Some(view_ix) = self.tree
                    .descendant_of_type_right(parent_ix, ContainerType::View) {
                match self.tree[view_ix]
                          .get_handle().expect("view had no handle") {
                    Handle::View(view) => view.focus(),
                    _ => panic!("View had an output handle")
                }
                trace!("Active container set to view at {:?}", view_ix);
                let id = self.tree[view_ix].get_id();
                self.set_active_container(id)
                    .expect("Could not set active container");
                return;
            }
            parent_ix = self.tree.ancestor_of_type(parent_ix, ContainerType::Container)
                .unwrap_or_else(|| {
                    self.tree.ancestor_of_type(parent_ix, ContainerType::Workspace)
                        .expect("Container was not part of a workspace")
                });
        }
        // If this is reached, parent is workspace
        let container_ix = self.tree.children_of(parent_ix)[0];
        // set the workspace to be active
        match self.tree[parent_ix] {
            Container::Workspace { ref mut focused, .. } => {
                *focused = true;
            },
            _ => unreachable!()
        }
        let root_c_children = self.tree.children_of(container_ix);
        if root_c_children.len() > 0 {
            let new_active_ix = self.tree
                .descendant_of_type(root_c_children[0], ContainerType::View)
                .unwrap_or(root_c_children[0]);
            let id = self.tree[new_active_ix].get_id();
            self.set_active_container(id)
                .expect("Could not set active container");
            match self.tree[new_active_ix] {
                Container::View { ref handle, .. } => handle.focus(),
                _ => {}
            };
            return;
        }
        trace!("Active container set to container {:?}", container_ix);
        let id = self.tree[container_ix].get_id();
        self.set_active_container(id)
            .expect("Could not set active container");
        // Update focus to new container
        self.get_active_container().map(|con| match *con {
            Container::View { ref handle, .. } => handle.focus(),
            Container::Container { .. } => WlcView::root().focus(),
            _ => panic!("Active container not view or container!")
        });
    }
}
STACK_EDU
Transfer code from one server to another server I wanted to transfer new code onto my new production server. I have the code files on my development server. Instead of uploading files using FTP from my local machine, is there another way to transfer code from one server to the other? What I am thinking: I will make an archive of the whole codebase to be transferred and place it in the webroot, so that it would be accessible on the internet at some link like http://www.mydomain.com/code.tar.gz. Now on the other server I will just run the command wget http://www.mydomain.com/code.tar.gz. Will this transfer be done in a few seconds? May I know if this is the correct approach? Heck no. http://superuser.com/q/214277/26316 More information is needed. Are these servers hosted by you, by your company, or by a hosting provider? What OS are you working with? What sort of access do you have to the servers? @mrdenny I have complete access on both servers. Two votes to move this to Super User? Why? Even if it was off-topic, why send it there of all places? @Kamlesh What about my other questions? @John yeah, SU didn't really seem like the right place to send this. Question appears on topic to me. Might need to go to SO later, but so far it seems to me to be on topic. The first thing of note is that FTP is not a good idea. You should definitely be using SCP. The next thing is that where you are creating files, you want to do so with the correct permissions. The easiest way to do this is as the root user (then you can create the files as any user you like). But you really don't want to allow root scp/ftp access. So that means you pull the files onto the server - not push them. I'd recommend building a release on your development system (so you can check it has deployed correctly), then using rsync to clone the image onto the server. You could use scp to move a backup image across - but you probably need to be root to unpack it correctly.
However, if you get problems then the only recourse you've got is to repeat the whole process again - rsync only copies the files which have changed. The best thing to do is to use ANT or a build script to export from your CVS/SVN/Git/whatever with a particular tag, so that the next time you upgrade the code, if something goes wrong, you can always move back to the original codebase. Failing that, use rsync from a clean development environment. Make sure the code is owned by and running as the same user, then do this:

ssh devserver
cd /path/to/webroot
rsync -e ssh -avzP * prodserver:/path/to/webroot/

As your first and clean deployment to your new production server, if you have ssh access, use rsync or scp. Check this out: Using Rsync and SSH. As Glen said, the best deployment plan is to use ANT or build your own script to export your code; if you have any problems during deployment, you can roll back.
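Since the answers recommend pulling files onto the server rather than pushing over FTP, here is a minimal sketch of the pack-and-unpack step using a tar pipe. The paths and file names are made up for the demo; over the network the second tar would run via ssh, e.g. `tar -C /path/to/webroot -czf - . | ssh prodserver 'tar -C /path/to/webroot -xzf -'`.

```shell
set -eu

# Hypothetical source tree standing in for the dev server's webroot
mkdir -p demo_src/app demo_dst
echo 'hello' > demo_src/app/index.php

# Pack the tree and unpack it at the destination in one pipeline;
# tar preserves the directory structure and file permissions.
tar -C demo_src -cf - . | tar -C demo_dst -xf -

ls demo_dst/app
# prints: index.php
```

For repeated deployments rsync remains the better tool, since it only transfers the files that changed; the tar pipe is the one-shot equivalent of the wget-a-tarball idea without having to expose the archive over HTTP.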
STACK_EXCHANGE
Ctrl+Shift+Enter does NOT insert a new line/paragraph before table in MS Word I have seen this question posted on multiple forums: How do I insert a new line or paragraph BEFORE an existing table in MS Word? An example of such a question is located here: How do I insert text above a table at the top of a Word document? The answer is most usually as follows: Place your cursor in the top-left cell prior to any text. Press Ctrl+Shift+Enter. Sounds very simple and seems to work for many. But for many others, like me, it did not. When Ctrl+Shift+Enter is pressed, NOTHING happens. I am using Microsoft Word for Microsoft 365. See the answer that worked for me below. This would have been better posted as an answer in the earlier thread. Are you using the Desktop version? Operating System? If you are on a Mac, you would use Cmd+Shift+Enter. https://support.microsoft.com/en-us/office/keyboard-shortcuts-in-word-95ef89dd-7142-4b50-afb2-f762f663ceb2?redirectsourcepath=%252fen-us%252farticle%252fkeyboard-shortcuts-in-word-for-mac-3256d48a-7967-475d-be81-a6e3e1284b25&ui=en-us&rs=en-us&ad=us#PickTab=macOS Charles Kenyon... I wanted to post the answer, but it would not let me. I am a new user... Not enough rep. So there's that. Rich Michaels, I am not making this too difficult. Ctrl+Shift+Enter did NOT work, because the keyboard shortcut was NOT there. So I posted what I had to do to get it back. I am not the only one this has happened to. The only idea I have for why this happens is that maybe some other software altered the keyboard shortcut. I don't know, but it was gone, now it's not. Note that the left-side Ctrl and the right-side Ctrl are different keys, and might behave differently. Try both sides! I think I have found what may be a solution for some: When Ctrl+Shift+Enter is pressed, by default, this is effectively inserting a Column Break.
If nothing is happening for you, you need to check whether your keyboard shortcut for inserting a Column Break is active/correct:

1. Go to the Quick Access Toolbar and select "More Commands."
2. Select "Customize Ribbon."
3. Select "Keyboard shortcuts: Customize."
4. Scroll down and select "All Commands."
5. Under "Commands," scroll down and select "InsertColumnBreak."

When I checked my shortcut assignment in the "Current keys" field, nothing was there... But if you have something there, you should be able to use whatever keyboard shortcut is in the field to insert the column break (line before the table). Try it out. If it still doesn't work, I don't know what else to tell you. However, if there is no shortcut assigned (the field is blank), or you want to change the assigned shortcut, continue on:

6. Place your cursor in the "Press new shortcut key" field. It seems Ctrl+Shift+Enter is considered the default keyboard shortcut for the "InsertColumnBreak" function. Of course you may use whatever key combo you want. Just be careful you don't use one that is already assigned to another function, unless you just don't mind losing that function.
7. Knowing this, press Ctrl+Shift+Enter (it shows up as Ctrl+Shift+Return in the field), or your other chosen key combo, to record your new keyboard shortcut.
8. Click "Assign." Enjoy!

The other method, Ctrl+Shift+Enter, is likely to work and does not require a QAT modification. Ctrl+Shift+Enter is the keyboard shortcut to insert a Column break in the Windows version of Word. On the Mac it is Cmd+Shift+Return. Anyone finding this should first look at the page you linked in your question. https://superuser.com/questions/175177/how-do-i-insert-text-above-a-table-at-the-top-of-a-word-document#:%7E:text=If%20your%20table%20is%20positioned,new%20line%20before%20the%20table Charles Kenyon, thanks, yes, I know all of that.
What I am saying is that Ctrl+Shift+Enter did NOT work, because the keyboard shortcut was NOT there. So I posted what I had to do to get it back. I am not the only one this has happened to. The only idea I have for why this happens is that maybe some other software altered the keyboard shortcut. I don't know, but it was gone, now it's not. When you have a problem like this in the future, try starting Word in safe mode as a diagnostic to check on whether other software is messing with your system. https://support.office.com/en-us/article/Open-Office-apps-in-safe-mode-on-a-Windows-PC-dedf944a-5f4b-4afb-a453-528af4f7ac72
STACK_EXCHANGE
Meaning of nani in Japanese Definition of nani Words related to nani Have I missed anything so far? (exp) what's what I can't make heads or tails of this assignment. (adv) something; for some reason Groaning strangely she is hurling her overflowing passion onto the canvas! (adv) (not) one (usu. in neg. phrases) He denies nothing to his children. (adv) quite; really; very; extremely I just don't know what to say. - nothing; not a bit; not at all (n) what; something; everything You should do your best in everything. - nothing (with neg. verb) - something or other; unspecified matter - (adv) uneventfully; without incident; without a hitch; peacefully (n) who; what kind of person So I want to explain who these people are. Somebody had drowned her in the bathtub. (adv) one way or another I am inconvenienced when my wife is away. Above all, I want to be healthy. nothing (with neg. verb) Were it not for water, nothing could live. (exp) is there anything else? Can I bring you anything else? (adv) all; all together; unanimously We are solidly behind you. - (n) wisdom and magnanimity; witty intelligence and large-mindedness - measurement; volume and weight - (adj-na) placid; composed; serene; calm - big hearted; broad-minded; magnanimous - great quantity (of something) - (n) the open sky and the serene sea; as open as the sky and serene as the sea; magnanimous His foolish proposal was approved unanimously. - (n) naniwabushi; var. of sung narrative popular during the Edo period (adj-i) casual; unconcerned; nonchalant A casual remark can hurt someone. (adv) somehow or other; for some reason or another Somehow or other I found his house. - (adj-na) largeheartedness; liberality; catholicity; generousness; generosity; magnanimity (n) cargo; freight Your shipment should be delivered within twenty four hours. (exp) don't know what's what I didn't know what was what. The inside of my head had gone to panic mode and I couldn't get things straight. 
(adv) somehow or other; something or other Everybody has some faults. (exp) anything and everything; from top to toe; from A to Z The party was not altogether pleasant. - (adj-na) of the old feeling of naniwa-bushi; marked by the dual themes of obligation and compassion that distinguish the naniwa-bushi ballads - (n) the large number of bridges over canals and rivers in Naniwa (present-day Osaka) (exp) on the least pretext; at the drop of a hat The teacher pokes his nose into everything. - (exp) Where there is a will, there is a way - (exp) what would be a good way to do it? - (n-suf) month (of the year); used after number or question word (e.g. nan or nani) (exp) at any rate; in any case; at the very least; if nothing else At any rate, that it had ended without serious incident was a small mercy. (exp) no one at all; no one, who he or she may be No citizen should be deprived of his rights. - (exp) what more can one say?; to be utterly at a loss for words (because of disbelief or disgust) (adv) inadvertently; for no special reason You were just listening to the talk, without thinking. - truthfully; unexpectedly - after realizing; without knowing (n) what language What languages do they speak in Canada? (exp) what is it (that); the meaning of something; what something is about What makes one person a genius and another person a fool?
OPCFW_CODE
Q: How do I Create a Continuous Playback DVD ? Answer: To do this you will need to use Sony DVD Architect Studio or DVD Architect Pro. Most of the magic in DVD Architect is controlled by the Properties Box in the top right corner of the desktop. So I will walk you through what to do with a series of screen shots from the program. I will assume that you already have made a project in Vegas Movie Studio or Vegas Pro, ready to be turned into a DVD. If you haven't got a clue how to use DVD Architect, then at the bottom of this page I will list five tutorials I have already made explaining the basics. 1. Import Your Media Boot up DVD Architect to get started. Right-click on Untitled and select Insert Media, then navigate through your computer and select your video. Repeat process if you have more than one video to import. 2. Create a Playlist Right click on the blue Menu 1 screen and select Insert Playlist. On the next screen press Select All to group all your videos together as a playlist and then press OK. Your screen should now look like this in the centre. 3. Setting the Playlist Button Properties At this stage you can right-click on the Playlist 1 box and then select Button Style to change the look - Text Only, Image Only or Text & Image. Now press on top of the Playlist 1 button so that it is highlighted and then press Action in the Button Properties box, which is located in the top right corner of desktop. Set Auto Activate to equal Yes. 4. Setting the Menu Screen Properties Go back to the Menu 1 blue screen and press anywhere outside of the Playlist 1 button area. This will now change the Properties box (top right corner) to say Menu Page Properties. Now press End Action. Next select Command and set mode to Activate button. Select Timeout then move your mouse pointer to far right of time value and press the triangle to open up the slider. You are now going to set the time down to the lowest value possible of 00:00:01.000 Final setting to change is Button. 
Set this to Link - Playlist 1. You are now finished with all the settings to make a DVD play back continuously. Next you can press Preview to check that everything is working properly before you burn a disc. If you need more help with understanding the basics of DVD Architect, then please watch my older tutorials below. Some of the methods I show in the "How to make a Basic DVD" videos, I have now updated. An example of that is how I imported the videos into the project that I just showed you. The method you have just learnt is the best way to import video if you have already made videos with Sony Vegas Movie Studio. How to make a Basic DVD Part 1 How to make a Basic DVD Part 2 How to make a DVD with Menu's Part 1 How to make a DVD with Menu's Part 2 How to Add Chapter Points to a DVD
OPCFW_CODE
Recommended read: PIPEFAIL: How a missing shell option slowed Cloudflare down https://blog.cloudflare.com/pipefail-how-a-missing-shell-option-slowed-cloudflare-down/

I've recently been doing similar with some of my utilities, albeit with an informal comparison between Ruby and Go versions, but would agree that for large, production-critical scripts, this is a great way to do it.

Recommended read: Rewriting Bash scripts in Go using black box testing https://stackoverflow.blog/2022/03/09/rewriting-bash-scripts-in-go-using-black-box-testing/

The Bash logo was one of my proudest non-code open source contributions. So cool to see it on popular videos and articles. 😎 Justin Dorfman (@jdorfman) Sun, 27 Feb 2022 01:43 GMT

Recommended read: shell - What is the difference between the Bash operators [[ vs [ vs ( vs ((? - Unix & Linux Stack Exchange https://unix.stackexchange.com/questions/306111/what-is-the-difference-between-the-bash-operators-vs-vs-vs

The hardest problem in computer science is escaping a quotation mark in a bash string. Lorin Hochstein (@norootcause) Mon, 17 May 2021 15:32 +0000

Recommended read: Escaping strings in Bash using !:q | Simon Willison’s TILs https://til.simonwillison.net/til/til/bash_escaping-a-string.md

TIL that you can use the "DEBUG" trap to step through a bash script line by line 🔎Julia Evans🔍 (@b0rk) Sat, 03 Oct 2020 15:24 +0000

Recommended read: Take care editing bash scripts https://thomask.sdf.org/blog/2019/11/09/take-care-editing-bash-scripts.html

Surrounding a bash command with () will not persist directory changes. So instead of:

cd ios && pod install && cd ..

You can do:

(cd ios && pod install)

Pretty neat! Kadi Kraman (@kadikraman) Sat, 04 Apr 2020 14:59 +0000

I used to write a lot of shell scripts before realising that what I was trying to do was treat shell scripting as a "full" scripting language (I won't define here what I mean by "full").
It's not - reach for a higher-level scripting language like Ruby or Python when things are getting more complicated, and allow shell scripts to glue things together, or be for quick tasks maybe a few lines long. When you do write them, this advice is great, but it's definitely worth gaining an understanding of when you should and shouldn't use them.

Recommended read: Anybody can write good bash (with a little effort) https://blog.yossarian.net/2020/01/23/Anybody-can-write-good-bash-with-a-little-effort

Recommended read: Bash $* and $@ https://eklitzke.org/bash-$*-and-$@

Recommended read: Things You Didn't Know About GNU Readline https://twobithistory.org/2019/08/22/readline.html

Automating Promotion of Jekyll Posts from Draft to Post (2 mins read). The handy script I've created to automate publishing a draft in Jekyll, with handy Zsh + Bash autocomplete.

DevOpsDays London 2018 (51 mins read). My writeup of my first DevOpsDays conference, and the awesome talks and conversations I was part of.

Extracting SSL/TLS Certificate Chains Using OpenSSL (1 mins read). A quick one-liner to get you the full certificate chain in
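The subshell tip quoted above is easy to verify for yourself; here is a minimal sketch (the target directory is arbitrary):

```shell
# Commands wrapped in ( ) run in a subshell, so a `cd` inside the
# parentheses does not change the parent shell's working directory.
start_dir=$(pwd)

( cd /tmp && pwd > /dev/null )   # this cd only affects the subshell

end_dir=$(pwd)
[ "$start_dir" = "$end_dir" ] && echo "directory unchanged"
# prints: directory unchanged
```

The same mechanism is why piping into `while read` loops loses variable assignments in bash: anything on the right of a pipe also runs in a subshell.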
OPCFW_CODE
Differential equations application problem I am studying differential equations, and I saw this interesting problem in another question (here): A destroyer is hunting a submarine in a dense fog. The fog lifts for a moment, discloses the submarine on the surface 3 miles away, and immediately descends. The speed of the destroyer is twice that of the submarine, and it is known that the latter will at once dive and depart at full speed in a straight course of unknown direction. What path should the destroyer follow to be certain of passing directly over the submarine? The problem gives a hint: establish a polar coordinate system with the origin at the point where the submarine was sighted. I honestly have no inkling as to how you can solve this problem. I am thinking the path must be some sort of spiral around the submarine's location (pursuit curve?) but I'm not sure. That sounds like the right idea, although I don't know how to actually do that formally. Maybe this will help: http://www.math.cornell.edu/~numb3rs/blanco/Spree.html Is the "full speed" of the submarine a known number? @Ian No, it isn't, only that it is half that of the destroyer. That's actually quite important to the problem, and is enough to use to solve it. Specifically it means that if the destroyer initially moves radially inward, then its radial position (for a while) will be $3-2vt$ while the submarine's radial position will be $vt$ forever. The destroyer should follow this course until they are equal i.e. until the destroyer has moved 1 mile, then it should follow the spiral path as in grdgfgr's answer. time $t$, velocity $v$, submarine subscript $s$, destroyer subscript $d$ At any given time, sub will be $r_s=tv$ away from the origin at a constant angle $\theta _0$ Destroyer will need to match this radial distance, so $r_d=tv$. We need to find $\theta _d (t)$. 
The vector velocity of the destroyer in polar coordinates is $\bar v_d = r_d'\hat r+r_d \theta _d '\hat \theta$, where $\hat{\ }$ denotes a unit vector. The magnitude is $|\bar v_d|=2v=\sqrt{ r_d'^2+r_d^2 \theta _d '^2}$. This gives us $$\theta _d '= \pm \frac{\sqrt{3}}{t}$$ The $\pm$ makes sense because we can choose to wrap around from any direction we want. The rest of the question can be completed with initial conditions, velocity, etc. Example: Let's assume the destroyer starts at $(3,0)$ with velocity $2$ and the sub at $(0,0)$ with velocity $1$ (Cartesian). The destroyer first moves directly to $(1,0)$, where it would meet the submarine in the best-case scenario. At that point, it will be on the $r_d = vt$ locus and will follow the path in the image: I'm not sure this is quite right. Why is it enough for the radial velocity of the destroyer to be the same as the radial velocity of the submarine, when the destroyer starts out at $r_d=3$ and the submarine starts out at $r_s=0$? I would think that the destroyer needs to get to $r_s$ first, and then follow the spiral path that you're describing. Alternately the destroyer could simply wait for $r_s$ to be $3$ and then follow the path you're describing. Yes, the destroyer should first arrive at 1 possible position before starting the path described above. @Ian I have given an example. I'd appreciate it if you have any input. How did you obtain the vector velocity of the destroyer? @EsX_Raptor That was explained. Which part do you not understand?
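To spell out the integration step behind the answer above (taking $t_0$ as the moment the destroyer first reaches the $r = vt$ locus, i.e. 1 mile from the origin in the example):

```latex
% Speed condition with r_d = vt, so r_d' = v:
(2v)^2 = r_d'^2 + r_d^2\,\theta_d'^2
       = v^2 + (vt)^2\,\theta_d'^2
\;\Longrightarrow\;
\theta_d' = \pm\frac{\sqrt{3}}{t}

% Integrating from t_0:
\theta_d(t) = \pm\sqrt{3}\,\ln\frac{t}{t_0}
\;\Longrightarrow\;
r_d = vt = v t_0\, e^{\pm\theta_d/\sqrt{3}}
```

So the certain-interception path is a logarithmic spiral: each full turn multiplies the radius by $e^{2\pi/\sqrt{3}}$, and since the destroyer stays on $r = vt$ while sweeping through every heading angle, it must pass directly over the submarine whichever straight course the submarine chose.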
STACK_EXCHANGE
Why don't popular programs like Photoshop or World of Warcraft support Ubuntu? The only thing stopping me from making Ubuntu 12.04 my main OS is that a lot of programs I use on Windows 7 and Mac aren't available for download on Ubuntu. Why don't popular programs like Photoshop or World of Warcraft support Linux? Also, would a company have to create the program (eg: Photoshop) for every possible distro (Gentoo, Arch Linux, Ubuntu, etc...), or would they just have to make one binary and it would work across all distros? I think market share is a very important aspect, because producing complicated software (like Photoshop) or games (like Warcraft) is too expensive for software companies, and when the market share of an operating system (like Ubuntu) is low, sales of this kind of software will be low; therefore producing this kind of software (expensive and complicated) is not beneficial for software companies. For the most part, distribution doesn't matter a great deal. However, the nice thing about the open source community is that in those certain situations where it does matter, everyday Linux users are happy to take over the management of software packages in order to ensure a program they enjoy using is stable on their distribution of choice. When you're dealing with closed source software, though, this becomes harder. In this situation, the producing company holds the responsibility for ensuring the stability of their software on all platforms, and that is quite a big project to dedicate a workforce to, especially when it's only for the benefit of a minority of users. I would also like to add that those closed source software companies don't see the potential in Linux. They think Linux OSes won't succeed, and that they aren't advanced enough to have their programs on them. Adobe, the creators of Photoshop, have even dropped support for Adobe AIR on Linux computers.
If you want a program that is a lot like Photoshop, I would suggest getting GIMP. It's available in the Software Center, or you can install it via the terminal by following this tutorial: http://howtoubuntu.org/how-to-install-gimp-2-8-in-ubuntu-12-04/ Ubuntu is a well-built open source flavor, but most of today's gamers use Windows as their go-to gaming platform. Ubuntu is mainly meant for software development and network debugging and setup. However, Ubuntu is capable of handling artistic work with tools such as GIMP 2.4, basically a cut-down version of Photoshop, though not made by Adobe. Back to the gaming case: there is not really a whole lot you can do besides run a designated Windows program through Wine, if it is compatible with the Wine runner. Anyway, if you're a gaming guru, then you're going to have to face the inevitable: Ubuntu, or any other Linux distro, cannot run many of today's games, which are meant for a dedicated graphics card and/or a dedicated computer capable of running any game at maximum settings. Graphics- and photo-wise you can depend on GIMP, but it's not as reliable as Photoshop. P.S. No, the company won't have to make a separate Photoshop for every distro - that would be one discombobulated bunch!
Ball Python Care Sheet. Care sheet written by: Joel Bortz. Leopard Gecko, Ball Python, Crested Gecko and others. I have always loved the boids (boa constrictors, pythons, anacondas). My first real experience with a snake was with a Burmese python, and for a long time I thought that all snakes were that large! Ball pythons are one of the most popular snakes kept in captivity worldwide. There are a number of reasons why I think this is one of the best pet snakes.

Ball Pythons: Selection, Care, Breeding. The record age for a ball python is more than 40 years — so plan on a long life for your new pet snake. With proper care, ball pythons can live 30 years or more. I've got to be honest with you; ball pythons are one of my all-time favorite snakes, and one that I still maintain to this day. Growing to a maximum size of 3 to 5 feet, ball pythons are not as large as many of the other constricting snakes that are kept as pets, are quite docile, and are easy to handle. They can be a bit more finicky with their eating. A 5-foot ball python is considered big, although lengths of 6 feet or more have been reported. When new acquaintances of mine hear that I keep pythons in my bedroom, they often imagine one of the larger pythons — perhaps a Burmese or reticulated python that they've seen in the zoo. I've put the ball python last on my list of the best types of snakes to keep as pets for one reason only.

Home › Ball Python Care Sheet. Genetic Wizard - Calculate odds and results of your breedings. From this page you can find resources on every aspect of husbandry and care for ball pythons, very popular pet snakes. BY: STEPHANIE - BHB REPTILES. A reader asked the question: How often can I handle my ball python without causing it too much stress? Here's our response. View products for animals or categories listed below. Not all animals currently have a care sheet; however, we are in the process of adding them. Ball Python Life Span.

Welcome to THE ROYAL PYTHON, a .uk website built with Royal Python keepers in mind, with the aim of helping both Royal Python enthusiasts and beginners who are thinking about keeping a Royal Python as a pet. On The Royal Python you can find information on keeping Royal Pythons. Welcome to the complete care guide. GoHerping includes as many arguments and options as possible, so you can decide what's best and give your python superior husbandry. The ball python is a good snake for a beginning snake owner. Ball pythons are among the most popular pet snakes; they are good beginner snakes because they are docile and easy to care for. Ball pythons are also one of the most popular snakes being kept and bred in captivity. Care Sheet for Ball Python - Python regius. Learn everything you need to know to care for your pet ball python. All the ball python info you'll need to become a master of ball pythons. Pet Snake #4 - The Ball Python. There are many reasons for their popularity.

Discover how you can make the perfect reptile or snake enclosure for a fraction of the cost of custom snake cages. Guaranteed ways to save money and have fun making fantastic snake cages and other reptile cages, with 10 simple steps that make it so easy anyone can learn how. Housing for a ball python can vary from simple to elaborate. Melissa Kaplan's Herp Care Collection, last updated January 1: Red-Tail Boa Care © 1995, 1997 Melissa Kaplan. Tokay Gecko Care Sheet. Tokay Gecko scientific name: Gekko gecko. Size: 8-12 inches. Life expectancy: 15-20 years. Introduction to Tokay Geckos. Gecko care and reptile care information. Expert care tips for the western hognose snake.

Make sure to clean up immediately if your snake has urinated or defecated. If it is unavoidable, be sure to thoroughly disinfect the area. See the Feeding Frozen/Thawed Foods care sheet for more information. If feeding your snake live rodents, do not leave them unattended.
I just finished reading Lean In by Sheryl Sandberg and loved it. In particular, I loved her discussion of how she scaled back her hours. She talks about how she went from being in the office twelve hours a day to being in the office for a more typical 8 hours after the birth of her first child. To do this, she had elaborate methods for hiding the fact that she was working a shorter day so that she wouldn't appear less devoted. This is an all too common thing to hear, particularly in technology, where we have somehow conflated the amount of time spent working with productivity. After fifteen years of working in a number of environments, I can tell you that the two measures are strongly correlated for me, but probably not how you think. When I first started working for startups, I bought into the hacker culture. I worked long hours and expected to be called at all hours of the night to deal with problems. It didn't take long before I started to feel burned out. In fact, within a few short months I was less productive at work and less happy at home. When I stopped to look at what I was really doing with my time, I realized that most of my time was spent reacting to issues that were caused by preventable problems. Over the next month, I focused on working fewer hours, but also on doing work that would reduce the number of times I was called in the middle of the night. Before long, I was working more normal hours and getting more productive work done. It turned out I was busy because I spent all of my time fighting fires instead of just stopping them from getting started in the first place. A few years later, I went to work for a very large company. Like the startup, most people there worked much more than the typical nine to five day. It didn't take long to figure out that this had a different cause. Meetings. At this company, people scheduled meetings for everything. Even worse, they invited more people than needed to attend.
The end result was that more than half of every day was spent in meetings that should have been email chains. To overcome all the time spent in meetings, people would either multi-task in the meeting, thereby making the meeting even less useful, or would work extra hours. After a few years of this I gave up and just started declining most meeting invitations. In particular, I declined almost every meeting that didn’t have a published agenda that was relevant to me. I also declined any meeting with more than five people in it. This turned out to be incredibly liberating. Initially I was afraid that I would miss out on learning about topics that weren’t directly important to me now but that might be in the future. It turned out that the meeting summaries that were sent out via email afterwards were a great way for me to keep up with these areas without spending hours a day on it. Of course, not all meetings sent out summaries afterwards. These meetings typically were also run without an agenda and were rarely worth attending. They were almost always made up of rambling tangents. Fast forward a few years and I had started Elevated Code. When I talk to potential clients, I am very up front with my work hours. I’m in the office from 8:30am until 5pm and don’t work nights or weekends. I don’t even check my email outside of those hours. If my clients really need something urgently, they can call me. Otherwise, it can wait until morning. Some clients are surprised by this, but very few have a problem with it. In fact, most are amazed at how much I get done during these hours. The secret is that working more hours isn’t a substitute for knowing what to do and doing it well. In fact, most of the time when I see people working long hours, it is due to either poor planning, poor organization, or poor discipline. For me, it’s also not sustainable. 
I might be able to get a small boost of productivity by working more hours for a week or two, but my effectiveness declines quickly over time. I applaud Sheryl for talking about how she limited her work hours. More importantly, I encourage everyone to do it. By having a better work-life balance I am not only much happier, but I'm also much more productive during my time at work.
The creative writing party game. Roll dice, find gems, make money. Posted on 9/2/2017 by Tim Rice Status: Prototype (playtesters wanted!) Hello! I've finally finished another prototype that I'm happy with, and I'm excited to share it with everyone. Some readers might recognize this game from a previous blog post I wrote about my abandoned prototypes. Well, I ended up revisiting that design, giving it a major revamp, and fixing a lot of the problems that frustrated me before. It was a great experience, and it just goes to show that it's a good idea to review old ideas every once in a while. I think it turned out really well, but I need your help to improve it even more! Dig Deep is an economic dice game where players lead mining operations. The goal is to secure the largest profit by finding the most valuable gems, upgrading your tools, and manipulating the market. In addition to its core mechanics of dice rolling and resource management, the game has light area control, engine building, and commodity speculation mechanics as well. These are some of the game's main features: As far as complexity goes, my goal was to design it as a medium-light weight strategy game. Obviously it's tough to judge my own design in this regard, but I think it turned out a bit heavier than I was expecting. I still think most families wouldn't have much trouble understanding it after a few turns, but I'll admit that there are a few tricky parts (especially investing), and it is possible to have multiple difficult decisions per turn. What I like about this game most, and what I think makes it unique, is how it implements a dynamic economic system in an easily digestible package. Being able to hold onto gems until the market improves, rushing to mine gems of a certain type before their price goes down, and manipulating the market to suit your current needs all appear naturally from these mechanics. I think it's pretty cool that it does all that in a short timeframe. 
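To make the market dynamics concrete, here is a toy sketch of a supply-driven gem market. Everything in it — the class name, the starting price, and the "price drops when you flood the market, recovers between rounds" rule — is invented for illustration and is not Dig Deep's actual rule set; it only shows the kind of hold-or-sell tension described above.

```python
# Toy sketch of one gem type's market. All numbers and rules are
# hypothetical, not Dig Deep's real mechanics.

class GemMarket:
    def __init__(self, base_price=5, floor=1, ceiling=9):
        self.price = base_price    # current sale price per gem
        self.floor = floor         # price can never drop below this
        self.ceiling = ceiling     # ...or rise above this

    def sell(self, count):
        """Earn money at the current price; flooding the market
        then pushes the price down by one step per gem sold."""
        earnings = count * self.price
        self.price = max(self.floor, self.price - count)
        return earnings

    def recover(self):
        """Between rounds, scarcity lets the price creep back up."""
        self.price = min(self.ceiling, self.price + 1)


market = GemMarket()
rushed = market.sell(3)    # sell immediately: 3 gems at 5 each = 15
market.recover()           # price climbs from 2 back to 3
late = market.sell(3)      # selling again into a depressed market pays less
print(rushed, late)        # prints: 15 9
```

Holding gems until `recover()` has run a few times, or dumping them before an opponent does, falls out of this one rule — which is roughly the "easily digestible economic system" effect the paragraph above describes.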
If you're interested in trying it out, I included links to the full instructions below, as well as the components file with assembly instructions. If you do play it, I would be eternally grateful if you (and anyone you played with) filled out this survey. It asks some basic questions about your experiences with the game, which will help me make it the best it can be. I'll do my best to credit you as a playtester if the game is ever published. Finally, if you're interested in publishing this game, email me at firstname.lastname@example.org. Thanks for reading, and if you decide to help me test my game then I'm even more grateful. I hope you guys enjoy it.