Thursday, May 5, 2011

How to show command line build options in Visual C++ 2008

In an MSVC 2008 project, building displays the following information in the output window:

1>------ Build started: Project: Project1, Configuration: Debug Win32 ------
1>Compiling...
1>main.cpp
1>test1.cpp
1>test2.cpp
1>Generating Code...
1>Linking...
1>LINK : test.exe not found or not built by the last incremental link; performing full link
1>Project1- 0 error(s), 0 warning(s)
========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========

How can I get output like this:

cl.exe /Od /I "includepath" /D "_UNICODE" /FD /EHsc /RTC1 /MDd /Zc:wchar_t- /Fo"Debug\" /Fd"Debug\vc90.pdb" /nologo /c /ZI /TP /errorReport:prompt
....

like a C# project does?

Thanks.

From stackoverflow
  • You can view the options passed to cl via Project Options -> C/C++ -> Command Line. I am working on a German version of VS2005, so I hope my translations map to the English Visual Studio.

  • This is controlled via the "Suppress startup banner" setting in the "General" sub-tab of the "C/C++" tab in the project's property pages. If you set it to "No", the Output window will show the command line used during compilation.

    welemon : Thanks Guido, this is what I want. thx
  • Switch on build logging (Tools -> Options -> Projects and Solutions -> VC++ Project Settings -> Build Logging). You should then get a build log (BuildLog.htm) in your intermediate files directory which contains all the information you need, including error messages. You will also get a ctrl-clickable link in the output window to display the build log.

Capturing keystrokes

If I am working in an application and I press a key on the keyboard, how can I capture that key (or string), including the source application's name, using C#?

From stackoverflow
  • You could override the ProcessCmdKey function of the control you want to capture the input from.

    protected override bool ProcessCmdKey(ref Message msg, Keys keyData)
    {
        var msgFormat = "Key Pressed: [{0}] {1}";
        var appName = System.AppDomain.CurrentDomain.FriendlyName;
        var logMsg = String.Format(msgFormat, appName, keyData);
        // write logMsg to your log here
        return base.ProcessCmdKey(ref msg, keyData);
    }
    

    Hope that helps.

  • There is a good keylogger example on codeproject.com. I hope this helps.

  • You need to set up Windows hooks to call your callback function for any keyboard event. You can use SetWindowsHookEx to set up the hooks.

    Check this article on the MSDN blog to read more about it.

    http://blogs.msdn.com/toub/archive/2006/05/03/589423.aspx
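
    In outline, the hook from that article boils down to the following (a condensed sketch of the standard WH_KEYBOARD_LL recipe; capturing the source application's name would additionally need GetForegroundWindow/GetWindowText, which is not shown):

    using System;
    using System.Diagnostics;
    using System.Runtime.InteropServices;
    using System.Windows.Forms;

    class KeyboardHook
    {
        private const int WH_KEYBOARD_LL = 13;
        private const int WM_KEYDOWN = 0x0100;

        private delegate IntPtr LowLevelKeyboardProc(int nCode, IntPtr wParam, IntPtr lParam);
        private static LowLevelKeyboardProc _proc = HookCallback;
        private static IntPtr _hookId = IntPtr.Zero;

        public static void Main()
        {
            _hookId = SetHook(_proc);
            Application.Run();                  // a message loop is required for the hook to fire
            UnhookWindowsHookEx(_hookId);
        }

        private static IntPtr SetHook(LowLevelKeyboardProc proc)
        {
            using (Process curProcess = Process.GetCurrentProcess())
            using (ProcessModule curModule = curProcess.MainModule)
            {
                return SetWindowsHookEx(WH_KEYBOARD_LL, proc,
                    GetModuleHandle(curModule.ModuleName), 0);
            }
        }

        private static IntPtr HookCallback(int nCode, IntPtr wParam, IntPtr lParam)
        {
            if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN)
            {
                int vkCode = Marshal.ReadInt32(lParam);   // virtual-key code of the pressed key
                Console.WriteLine((Keys)vkCode);
            }
            return CallNextHookEx(_hookId, nCode, wParam, lParam);   // always pass the event on
        }

        [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)]
        private static extern IntPtr SetWindowsHookEx(int idHook, LowLevelKeyboardProc lpfn, IntPtr hMod, uint dwThreadId);

        [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)]
        private static extern bool UnhookWindowsHookEx(IntPtr hhk);

        [DllImport("user32.dll", CharSet = CharSet.Auto, SetLastError = true)]
        private static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);

        [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
        private static extern IntPtr GetModuleHandle(string lpModuleName);
    }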

What's current best practice to prevent email injection attacks in PHP?

What's considered the best practice these days for sanitizing data from a PHP email form?

I'm currently using something like this...

$msg = $_POST['msg'];
$email = filter_var($_POST['email'], FILTER_VALIDATE_EMAIL);
$name = $_POST['name'];

$subject = "Message from the MY_WEBSITE website e-mail system";
$message = "From: " . $name . "\n";
$message .= "Email: " . $email . "\n\n";
$message .= $msg;
$headers = "From: " . $email . "\r\n" .
           "Reply-To: " . $email . "\r\n" .
           "X-Mailer: PHP/" . phpversion();

$mailSuccess = mail("me@example.com", $subject, $message, $headers);

Is it sufficient protection to simply filter the email field in this fashion? Can/should I harden the script more effectively to protect against spammers?

Thanks in advance!

[EDIT]Clarification, since the answers so far suggest that I've not explained myself well.

I'm not principally concerned with spambots getting hold of this script, but with anyone utilizing it to send illicit emails to any address other than me@example.com. This might include a bot, but could equally be a human defeating a CAPTCHA test.

What I'm looking for is PHP that will ensure that the email sent by the mail() method is not hijacked. This is probably a regex or filter or similar that simply strips certain characters. Thanks again.[/EDIT]

From stackoverflow
  • I would do this:

    • Use CAPTCHA;
    • Fail to send if the subject or body includes any HTML tags whatsoever. Note: I didn't say strip them out. Just don't send the email, and give the user an error message explaining why. There's no point sending yourself a filtered spam message. Just don't send it;
    • strip out any high or low characters (filter_var() can do this);
    • limit the message to, say, 4000 characters (or some other appropriate limit that you pick);
    • fail if the message contains any URL that doesn't point to the current site;
    • arguably use some of the techniques from "How do you stop scripters from slamming your website hundreds of times a second?" to ensure there is a human sending the message.
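
    On the hijacking concern specifically: header injection needs a CR or LF to smuggle extra headers into mail(), and in the question's code only the e-mail address ends up in a header. A minimal sketch of the check, using the question's variables (the error handling is illustrative):

    if ($email === false || preg_match('/[\r\n]/', $name)) {
        // FILTER_VALIDATE_EMAIL returns false for anything that isn't a plain
        // address, so a validated $email can never contain CR/LF; the $name
        // check is belt-and-braces, since $name only ends up in the body here
        die('Invalid input.');
    }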
    Wikiup : > Best practice (imho) is not to send emails to email addresses supplied from an HTML form. Just to be clear on this point, the provided code doesn't -- or at least shouldn't -- send email to anyone other than the site owner. That address is hard-coded into the PHP (me@example.com). What I want to ensure is that no one hijacks the script, as with the classic \ncc:victim@victim.com to spam others. Thanks for the link! :c)
    cletus : Sorry you're right: I misread it. Fixed now.

Can I install 2008 SQLExpress alongside 2005 SQLExpress?

Can I have an instance of 2005 SQLExpress and a 2008 SQLExpress server running on my machine?

Both to be accessed using the 2008 management console. I currently have the express tools installed, but my office recently purchased a full 2008 license. Does this come with an improved console? And if so, can I just install these tools or do I have to un-install the express console first?

From stackoverflow
  • What a funny coincidence, I did just that 30 minutes ago. So far, no errors. Also, the protocols and services coexist with one another. You have to specify an identifier for the new installation which has to differ from the old SQLExpress installation. I chose to name the new instance "SQLExpress2008", which is now listed alongside the previous instance "SQLExpress".

    Everything seems fine, though I have to admit I am an amateur when it comes to Microsoft SQLExpress. Hope that answers your question, if you have any specific concerns I could have a look at them.

  • You can have both instances co-exist on the same server. They would have different instance names and would use different ports (so if you have configured your firewall to allow access to port 1433, you would need to open additional ports for the second instance), but it should work.

  • It ought to work, at least in theory. Although the question arises -- why install both, given that 2008 should be able to do everything that 2005 does?

    littlechris : I do work for more than one customer, one of which is using 2005 and has no plans to move. I have a new customer with no SQL Server yet, and I thought I might as well install the latest.
  • Yeah, 2005 and 2008 coexist just fine. Just remember to set the RAM usage.

    Also note that Visual Studio 2005 and 2008 can be on the same machine as well.

    You can also open up a 2005 DB in 2008 and vice versa (there are some issues, but I don't know what they are).

How to prevent git merge from merging a specific file from trunk into a branch and vice versa

Hi, I am using git while developing VHDL code. I am doing development on a component in a git branch: comp_dev. The component interface does not change, just the code inside the component. Now, this component already exists in the master branch, but in a more stable version, enough for other developers to be able to use the component. The other developers also have branches for their work, and when their code is good they merge their branches back to master.

At this stage I need to be able to merge all the changes from master back to my comp_dev branch, which is basically no problem, but sometimes the stable version of the component I am working on does change as part of other designers' work, though not the interface. I have to do a manual git merge -s ours on that particular file every time I want to merge; otherwise I get a conflict that I need to solve manually, throwing out their work.

The same happens if I want to merge changes in other files back to master. If I forget to do git merge -s ours src/rx/state_machine.vhd comp_dev before I do a git merge, then I end up with either a manual merge, or I accidentally merge an unstable version of the state machine on top of the stable one.

Is there a way to temporarily exclude one file from merges?

From stackoverflow
  • If I understand correctly, you want to defer having to merge changes to the said component (let's name it 'C') while the focus of your work is on some other module. A side effect of your work are minor changes to 'C', which happen to conflict with other people's work, but you don't want the hassle of also merging 'C' every time you push your focus work to wherever your 'master' is.

    AFAIK, a change set in git is atomic and does not know about files; so there is no way to exclude a file from merge, short of resolving the merge conflict in favor of the revision you prefer.

    There may be another way out of your situation though.

    You probably want to factor 'C' out into a separate library, and have a separate git repository for it. Your project will be split into multiple repositories. But fear not, git will let you manage this through submodules. Check out http://www.kernel.org/pub/software/scm/git/docs/git-submodule.html for the details on how to do so.

    Submodules will allow you to check out a given revision of 'C', and focus your work on a different part of the source. You can then edit, commit, and merge your work independently of the changes anyone has made to 'C'.

    Concerning the management of concurrent changes, the usual stance with open-source version control is that VC is not a substitute for communicating with the team members. Agree on the general development approach, minimize concurrent incompatible changes, and the development process will become less of a pain.

  • I've been chatting this up a bit with some friends, and I thought I'd share in case you find it useful.

    rebase and merge might not be too useful for what you're trying to do. A safer, easier, boring, and otherwise more predictable approach to only fetching certain bits of code or certain files would be using git-provided methods for manually moving patches, like (a) cherry picking individual commits or (b) format-patch and am. If you need to tweak the result (like deleting a file), do so and explain why in a new commit. Or just tweak stuff while you're cherry picking or applying patches. am can be --interactive, and a cherry-pick can be modified with a commit --amend.

    I tried another tack with a long-standing branch: merge everything, then manually revert the stuff I really didn't want merged. This worked fine too.

    Something else that seems like a great idea is using fine-grained branches.

    I guess I feel like one key take-home message is that paying attention to the code and having good automated tests that are run frequently is more important than mastering a particular git patch/merge strategy.

  • I posted a solution that works for me here
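
    For reference, the usual recipe for this (which may or may not be what that solution does) is a custom merge driver: mark the file in .gitattributes and register a driver that always keeps the current branch's version.

    # .gitattributes at the repository root
    src/rx/state_machine.vhd merge=ours

    # one-time setup: "true" as the driver command simply keeps our version
    git config merge.ours.driver true

    Note that this silently discards the other side's changes to that file on every merge, which is exactly the trade-off the question describes.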

SQL collation issues when deploying live

History: I had a deployment the other night which involved executing a script that creates a stored procedure. However, when running the script, an error occurred about the collation of the DB.

The Problem: With the help of our friend Google, I managed to find out that the problem was that our live environment's DB has one collation and the temporary DB's collation is different, and I was getting an error when trying to insert into a table variable.

The Question: What is the best way forward, or my options?

Environment: SQL Server 2000

Thanks in advance.

From stackoverflow
  • The obvious solution is to reinstall your development database with the same collation as the production database.

    If you can't, for example if you have multiple production databases with different collations, you can make sure your updates either fail or succeed as a whole. Put a transaction around all the stored procedure updates, and make sure to use alter, not drop/create.

  • We have been running into this one with temporary databases that we use for integration tests, in relation to development and production databases. What we have done is to make sure that all databases use the same collation. In your case, you should probably be able to do a simple search/replace in the script file and simply change the collation to the one used in the target database.

    Edit: changing in the script will get you forward now; in the longer run you will want to make sure to use the same collation in all related databases.

  • There are a few solutions:

    • Make sure all your collations match within the same DB. Otherwise you'll be in for some pain.
    • The right thing to do, if your temp tables don't contain much data, is to recreate them with the same collation as the destination tables.
    • If that's not an option, look for string types and force the collation to the destination database's one:

       INSERT INTO dest_db..DEST_TABLE ( DEST_VARCHAR_COLUMN )
       SELECT TMP_VARCHAR_COLUMN COLLATE <dest_db_collate>
       FROM tmp_db..#TEMP_TABLE
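
    For temp tables and table variables specifically, declaring the string columns with COLLATE DATABASE_DEFAULT (available since SQL Server 2000) sidesteps the mismatch with tempdb; a minimal sketch, with an illustrative column:

       DECLARE @tmp TABLE
       (
           SomeText VARCHAR(50) COLLATE DATABASE_DEFAULT
       )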
      

Design - how do I handle events on persisted objects as they change state?

Maybe a silly question this, but I'm a rookie developer.

Let's say, in a data-driven CRM'ish application, I have a customer type that can go through a number of phases - i.e. phases 1 - 5.

As a customer changes phase - events should trigger. Example - as a customer enters phase 3 from phase 2, an email is sent, some lists are updated and some calculations are performed.

I'm imagining that a customer changing state could be the result of a user of the application manually updating the customer through some graphic interface.

So I'm wondering - should I handle this by asserting that there's only one way to update the phase state of the customer, and then ensure that whenever that action completes, a list of actions is carried out?

In my mind (and scenario) this would mean retrieving a customer from a relational database, updating a phase field, persisting the customer back down, and then always reacting to this action by firing off whatever actions are registered as dependent on that particular phase change. However, I'm not sure this would be smart if I wanted to do a batch phase change of 10,000 customers.

Any thoughts at all to this? I'm really just looking for any kind of input - assume that I'm completely clueless.

From stackoverflow
  • In many cases it's better to have an explicit function in your business logic to change the phase. This should be the only way to change the phase, and it's explicit - like ChangeCustomerPhase(Customer customer, Phase newPhase). This makes it much simpler to handle and track changes than if everything can be changed freely.
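
    A minimal sketch of that single entry point (all names here are illustrative, not from the question):

    using System;
    using System.Collections.Generic;

    public enum Phase { Phase1, Phase2, Phase3, Phase4, Phase5 }

    public class Customer
    {
        public int Id { get; set; }
        public Phase Phase { get; set; }
    }

    public class CustomerService
    {
        // actions registered per target phase, e.g. "send an email on entering Phase3"
        private readonly Dictionary<Phase, List<Action<Customer>>> _onEnterPhase =
            new Dictionary<Phase, List<Action<Customer>>>();

        public void RegisterOnEnter(Phase phase, Action<Customer> action)
        {
            List<Action<Customer>> list;
            if (!_onEnterPhase.TryGetValue(phase, out list))
                _onEnterPhase[phase] = list = new List<Action<Customer>>();
            list.Add(action);
        }

        // the single, explicit way to change a customer's phase
        public void ChangeCustomerPhase(Customer customer, Phase newPhase)
        {
            if (customer.Phase == newPhase) return;
            customer.Phase = newPhase;
            SaveCustomer(customer);              // persist first...

            List<Action<Customer>> actions;      // ...then fire the registered actions
            if (_onEnterPhase.TryGetValue(newPhase, out actions))
                foreach (var action in actions)
                    action(customer);
        }

        private void SaveCustomer(Customer customer)
        {
            // the UPDATE against the customer table goes here
        }
    }

    For the 10,000-customer batch case, a batch overload of the same entry point can reuse the registered actions but defer or aggregate the side effects (e.g. queue one email job instead of firing per row).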

  • I think it's OK to have separate functions: one for a single customer phase change and another for a batch change. The latter would or wouldn't carry out the additional actions, as needed. It could also do it more efficiently, or even enqueue the actions, or part of them (like sending e-mails) for background processing, if the additional actions are lengthy and the phase change has to complete in a timely manner.

    Another issue pops up when the phase change is due to some, possibly complex, conditions arising, rather than a manual phase change. Then you should hook a condition check somewhere in your business logic, low enough to catch all update operations affecting the phase. But as you wrote, that's not the case here, since in your situation the phase change is issued manually.

SOAP with Attachment (SwA) in C#

Hi all,

I need to use .NET to consume a Java-written SOAP service which expects simple MIME attachments on some of its methods.

Does anybody know how to accomplish it? I could not find any information about using WCF or even WSE clients with such attachments.

Thanks!

From stackoverflow

Upgrading to VS 2008 Professional from Web Developer Edition

I currently have VS 2008 Web Developer SP1 installed on my machine and I've purchased the professional edition. I also have 2003 and 2005 Pro installed.

Should I un-install VS 2008 Web Developer before installing the Professional edition? If not, will I have two versions of VS 2008, or will Professional "upgrade" Web Developer?

Please note I haven't bought an upgrade license, I have the full license for professional.

From stackoverflow
  • Web Developer is a separate product from VS2008, and the install shouldn't change it.

    As an aside - with the multi-targeting in VS2008, you might want to save some space by uninstalling VS2005? VS2008 can't target 1.1, though (except maybe via MSBEE) - so maybe keep VS2003 if you still maintain 1.1 code. Obviously, if you do this, first make sure that VS2008 supports any legacy requirements you need... (i.e. don't blame me if you can't edit your project!)

    I would probably uninstall Web Developer first, but you might find it handy for small test projects? I keep C# Express installed for the same reason (I find it quicker for small scratch tests).

How do you configure GroovyConsole so I don't have to import libraries at startup?

I have a groovy script that uses a third party library. Each time I open the application and attempt to run my script I have to import the proper library.

I would like to be able to open GroovyConsole and run my application without having to import the library.

From stackoverflow
  • At least on Linux, GroovyConsole is a script that has the following command:

    startGroovy groovy.ui.Console "$@"
    

    startGroovy itself is a script which starts Java. Within the startGroovy script you should be able to modify your classpath and add the missing libraries.

    From startGroovy:

    startGroovy ( ) {
        CLASS=$1
        shift
        # Start the Profiler or the JVM
        if $useprofiler ; then
            runProfiler
        else
            exec "$JAVACMD" $JAVA_OPTS \
                -classpath "$STARTER_CLASSPATH" \
                -Dscript.name="$SCRIPT_PATH" \
                -Dprogram.name="$PROGNAME" \
                -Dgroovy.starter.conf="$GROOVY_CONF" \
                -Dgroovy.home="$GROOVY_HOME" \
                -Dtools.jar="$TOOLS_JAR" \
                $STARTER_MAIN_CLASS \
                --main $CLASS \
                --conf "$GROOVY_CONF" \
                --classpath "$CP" \
                "$@"
        fi
    }

What exactly is the technology stack defining Web APIs?

What are the abstract levels of technology that are used to build a Web APIs such as GData?

From stackoverflow
  • Web services typically use one of two technologies.

    So-called REST APIs make use of the HTTP verbs GET, PUT, DELETE, etc. to create a web service. For instance, a search engine could offer an API where you search by doing an HTTP GET request to www.searchengine.com/apple, which returns all results for apple. The technology building blocks are the existing basic web protocols that you are familiar with: HTTP, HTML, and server-side PHP, ASP, etc.

    Alternately you can use SOAP and WSDL to define an XML based API based around defined requests which can take a number of parameters, which will return a response. Many languages can import the WSDL file for a service and provide a functional API based on it. This makes accessing the service very easy.

    For more reading on WSDL and SOAP I recommend www.w3cschools.com

    There are pros and cons of both methods which I will leave you to explore.

    Regards

  • It's worth mentioning that GData itself is largely the Atom Publishing Protocol (RFC 5023) plus extensions (as allowed by the spec), with a proprietary authentication method.

    So, that's an abstraction over REST that adds semantics for publishing and editing a collection of objects.

Implement RC5 encryption in Java

I currently have an assignment to implement RC5 encryption in my Java project, but I don't have much of an idea how to achieve it.

I tried to create a Cipher object as in the line below, and it throws an exception:

Cipher desCipher = Cipher.getInstance("RC5");
java.security.NoSuchAlgorithmException: Cannot find any provider supporting RC5

Other algorithms like "DES" and "RC2" work fine when I create the Cipher object.

FYI, I'm using JDK 6.

From stackoverflow
  • I believe http://bouncycastle.org/ has an RC5 implementation; use them as your provider and see if that works.
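
    In code, registering Bouncy Castle and requesting RC5 looks roughly like this (a sketch: the transformation string and 16-byte dummy key are illustrative, and the bcprov jar must be on the classpath):

    import java.security.Security;
    import javax.crypto.Cipher;
    import javax.crypto.spec.SecretKeySpec;
    import org.bouncycastle.jce.provider.BouncyCastleProvider;

    public class Rc5Demo {
        public static void main(String[] args) throws Exception {
            // make the Bouncy Castle implementations visible to the JCE
            Security.addProvider(new BouncyCastleProvider());

            SecretKeySpec key = new SecretKeySpec(new byte[16], "RC5");
            Cipher cipher = Cipher.getInstance("RC5/ECB/PKCS5Padding", "BC");
            cipher.init(Cipher.ENCRYPT_MODE, key);

            byte[] ciphertext = cipher.doFinal("hello".getBytes("UTF-8"));
            System.out.println(ciphertext.length + " bytes of ciphertext");
        }
    }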

    Dickson : Thanks Paul... it helps

IE textarea width changes when user types some text

I have a page with a few textarea and <input type="text"> HTML controls. These elements are placed in a div tag.

In IE 7, after the user types some text in the textarea, the width of the textarea changes on its own; if the user then types some text in the textbox, the textarea returns to its original width.

Has anyone faced any similar issues?

From stackoverflow
  • Set a width in px for the text fields. That should resolve the expanding behaviour.

    dalbaeb : This does, indeed, work.
    Peter Perháč : wish there was a better way though. it works but... ahhhh the aggravation, making things work in ie6

What are you using to script deployments for complex server products?

What are you using to script deployments for complex server products?

Imagine we've got some slick new code (or maybe version 42 of slick code) and have to ship it. Now, of course we have the basics in place here - databases build and upgrade themselves. We have nice "packages" (zip files). But there is a lot in between "shiny new server" and "OK, now hit the go button" - from service accounts, to 3rd-party software, to fonts.

We're an MS shop, by the way: IIS, ASP.NET, MSSQL. We're mostly deploying our own servers. As the product gets bigger and scales horizontally, updating a lot of servers is getting heavy.

Enlighten me as to how to do this easily and well, with an answer to any of the following ...

  • What's your favorite solution(s) to server deployment?
  • How often do you fall back to manual steps (add this x to IIS)?
  • Does anyone really use the Website Setup projects in Visual Studio?
  • Is it cheaper to buy an intern than script it?
  • If you script it, what are you using - and did you like it? (Someone here just made a compelling installer out of MSBuild. After .NET installs, it's always there.)
  • Have you learned to love WMI? How?
  • Have you achieved deployment transcendence - a mere glance at the shiny new server deploys the code and all its dependencies? How?
From stackoverflow
  • We use NANT for uploading new versions to production. We have modified it to our needs and it works OK. We hardly ever need to make manual changes, except for changes to web.config, which we do manually. We upload a new version on a weekly basis, and in some cases we upload daily. We have the option to roll back to a previous version, we can select which servers to upload to, and we upload different branches to different servers - everything we needed to do, we were able to build on top of it.

    I recommend writing some upload script. This is the best way you can make sure that you don't forget anything, that you can easily roll back and that you know exactly what's going on on your servers.

Sorting a list with paths by deepest directory

Hello.

I need to sort a list that contains paths (relative or absolute) so that the deepest path appears first, for example:

\New Folder\Item1\tools\1
\New Folder\Item1\tools
\New Folder\Item1
\New Folder
etc...

Is there an API in the Path class I can use to do it?

Thanks! J.

From stackoverflow
  • This is a bit out-of-the-box, but you could always do this:

    var sortedList = list.OrderByDescending(
        p => p.Count(c => c == Path.DirectorySeparatorChar
            || c == Path.AltDirectorySeparatorChar));
    

    That is, simply order by how often the path separator character appears.

    Cerebrus : This is how I would do it, too!
    Pasi Savolainen : This requires that all paths are absolute. Which is a good requirement since one never knows where the relative paths have been to.
    Matt Hamilton : Yeah, that's true - you may need to call Path.GetFullPath on each string and provide it with the "root" path that you know they all map to.
    hmemcpy : That works, but it would jumble the results if there is more than one root directory in the list.
  • I assume those paths are strings, so why not just sort them in descending order?

    var paths = new List<string>
    {
      "\\New Folder",
      "\\New Folder\\tools",
      "\\Windows",
      "\\Windows\\System32",
      "\\New Folder\\tools\\1",
    };
    
    var result = paths.OrderByDescending(s => s);
    

    Or if they are in a string[] you can use:

    Array.Sort(paths);
    Array.Reverse(paths);
    

    Result is:

    \Windows\System32
    \Windows
    \New Folder\tools\1
    \New Folder\tools
    \New Folder

Is ClearCase recursive labelling more efficient than element-by-element labelling?

I use ClearCase on a project with ~3700 files. While making a release, some directories have to be labeled recursively; the file count there is ~2400. The files in other directories are labelled selectively. I currently label the files using a script that iterates through a list of files and labels them one by one. This takes around 20 mins while connected over Ethernet.

Is it efficient to split the labelling process into two parts, one for recursive labelling and one for selective labelling? Is recursive labelling faster by 1x, 2x, ...?

From stackoverflow
  • ClearCase operations file by file are always... slow!

    You need to apply your label recursively if you can (that is, if all the files of a given tree need labeling).
    It is also recommended to do that operation in a dynamic view, in order to avoid any side effects related to the update status of a snapshot view (not updated, or update in progress).

    The result will be faster (I do not have an exact estimation, but at least 2x seems right).

    Warning: the directory from which you recursively apply the label must be at the right version (i.e. the version selected by the config spec).


    Do not forget that the point of labeling is to identify a coherent set of files (i.e. a set of files which evolves and is labeled as a whole). That means "mklabel -rec" is always better than putting a label on a single file.
    A recursive label does not miss any file, whereas a label put on files from a list can result in an incomplete set (for instance, if the list of files to label is obsolete or incomplete).
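
    In command form, the recursive variant looks like this (the label type and paths are examples):

    cleartool mklbtype -nc REL_1.0                   # create the label type once per VOB
    cleartool mklabel -recurse REL_1.0 .             # label the current directory tree
    cleartool mklabel -replace -recurse REL_1.0 .    # move an existing label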

  • Why don't you use the Apply Label tool? That's what we do anyway.

Security (aka Permissions) and Lucene - How ? Should it be done ?

First some background to my question.

  • Individual entities can have read Permissions.
  • If a user fails a read permission check, they can't see that instance.

The problem relates to introducing Lucene and performing a search which simply returns a list of matching entity instances. My code would then need to filter the entities one by one. This approach is extremely inefficient, as a user may only be able to see a small minority, and checking many entities to return a few is less than ideal.

What approaches would developers use to solve this problem, keeping in mind that indexing and searches are performed using Lucene?

EDIT

Definitions

  • A User may belong to many Groups.
  • A Role may have many Groups - these can change.
  • A Permission has a Role - (indirection).
  • X can have a read Permission.
  • It is possible for the definition of a Role to change at any time.

Indexing

  • Adding the set of Groups (expanding a Permission) at index time may result in the definition becoming out of sync when the list of member groups for a Role changes.
  • I am hoping to avoid having to reindex X whenever the definition of a Permission/Role changes.

Security Check

  • To pass a Permission check, a User must belong to a group that is within the set of groups belonging to the Role for a given Permission.
From stackoverflow
  • It depends on the number of different security groups that are relevant in your context and how the security applies to your indexed data.

    We had a similar issue which we solved the following way: When indexing we added the allowed groups to the document and when searching we added a boolean query with the groups the user was a member of. That performed well in our scenario.
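
    To illustrate that scheme with Lucene.Net (a sketch against the 3.x API; the "acl" field name and surrounding variables are made up, and on 2.x the Occur enum lives on BooleanClause):

    // using Lucene.Net.Documents; using Lucene.Net.Index; using Lucene.Net.Search;

    // at index time: one untokenized "acl" value per group allowed to read the document
    Document doc = new Document();
    doc.Add(new Field("content", text, Field.Store.YES, Field.Index.ANALYZED));
    foreach (string group in allowedGroups)
        doc.Add(new Field("acl", group, Field.Store.NO, Field.Index.NOT_ANALYZED));

    // at search time: the user's query must match, and so must at least one of their groups
    BooleanQuery acl = new BooleanQuery();
    foreach (string group in userGroups)
        acl.Add(new TermQuery(new Term("acl", group)), Occur.SHOULD);

    BooleanQuery query = new BooleanQuery();
    query.Add(userQuery, Occur.MUST);
    query.Add(acl, Occur.MUST);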

    Manne : We probably did something very similar: the indexed objects have group ids with read privileges in a metadata field: "G1, G6, G203," and searches are just targeted with "contains G1 or G70". This also extends to users by using a different prefix.
  • It depends on your security model. If permissions are simple - say you have three classes of documents - it is probably best to build a separate Lucene index per class and merge the results when a user can see more than one class. The Solr security wiki suggests something similar to HakonB's suggestion - adding the user's credentials to the query and searching by them. See also this discussion in the Lucene user group. Another strategy is to wrap the Lucene search with a separate security class that does additional filtering outside of Lucene. It may be faster if you can do this using a database for the permissions.

    Edit: I see you have a rather complex permission system. Your basic design choice is whether to implement it inside Lucene or outside Lucene. My advice is to use Lucene as a search engine (its primary strength) and use another system/application for security. If you choose to use Lucene for security anyway, I suggest you learn Lucene Filters well, and use a bitset filter in order to filter a query's results. It does have the problems you listed of having to keep the permissions updated.

    mP : I will add more detail to my original q...
    Yuval F : Added detail to my original answer. HTH

Firebird - UTF8 VarChar size

I am changing all varchar columns in our firebird database to UTF8 however I don't understand the difference in varchar size.

For example, with the charset and collation set to nothing, we can set the varchar size to 255; if we set the charset and collation to UTF8 and the varchar to 255, it reads different values.

What would be the equivalent varchar size for varchar(255) in UTF8?

From stackoverflow
  • Using the UTF8 character set for VARCHAR(N) fields requires reserving enough space for any N UTF8 characters. The length of one such character may be between 1 and 4 bytes, so the only safe thing is to allow for N characters of length 4 each; for N = 50 that means there needs to be space for 200 bytes (worst-case condition).

    You could use the FlameRobin tool to have a look at the internals. Let's assume you have a table

    CREATE TABLE "TableÅÄÖåäö"
    (
      "ColÅÄÖåäö" Varchar(50)
    );
    

    in a database with default character set UTF8. (Note that you need at least Firebird 2.0 for this.)

    The system tables store information about all relations and their fields. In the system table RDB$RELATION_FIELDS there is a record for this field, which has (for example) RDB$1 as the RDB$FIELD_SOURCE. Looking into RDB$FIELDS there is one record for RDB$1, and its value of RDB$FIELD_LENGTH is 200.

    So to answer your question: To have a UTF8 column with space for 255 characters you enter it as VARCHAR(255), but in the database it will have a size of 1020 bytes (255 × 4).

  • The VARCHAR(n) datatype contains text of varying length, up to a maximum of n characters. The maximum size is 32,767 bytes, which can be 10,922 to 32,767 characters, depending on the character size (1..3 bytes). You must supply n; there is no default.

    Firebird converts from variable-length character data to fixed-length character data by adding spaces to the value in the varying column until the column reaches its maximum length n. In the reverse conversion, trailing blanks are removed from the text.

    The main advantage of using the VARCHAR(n) datatype is that it saves memory space during the execution of PSQL programs.

Batch extract rars with spaces in names

I am trying to batch extract some rars that are in some zips in some directories. Long story short, this is my loop through the rar files:

for %%r in (*.rar) do (

unrar x %%r
)

Problem is that %%r gets the wrong value. If the files name is "file name.rar" then %%r gets the value "file" - it stops at the first space in the file name.

How do I get this loop to work for files with spaces in their names?

Thank you

From stackoverflow
  • unrar x "%%r"

  • The problem is that 'for' uses space as the default delimiter. You can set this using delims=xxx; look here for syntax. Or you can use ForFiles.

    Joey : cmd is also not a UNIX shell which does globbing wherever it sees an asterisk. For every rar file you will have its filename completely contained in %%r, regardless of it having spaces or not.
  • Try this:

    for /f "usebackq delims==" %i in (`dir /b *.rar`) do unrar x "%i"
    

    If you are using it in a batch file, remember you will need to double the percent signs to escape them.

    Bojan : Thank you very much.
  • %%r will contain the complete file name including spaces. It's your call to unrar which has the problem. If the file name contains spaces, you have to enclose it in quotation marks; otherwise unrar sees the two (space-separated) parameters file and name.rar instead of a single filename with a space.

    So the following will work:

    for %%r in (*.rar) do unrar x "%%r"
    

    Also, if you are curious where the problem lies, it's sometimes very helpful to simply replace the program call with echo:

    for %%r in (*.rar) do @echo %%r
    

    where you will see that %%r includes the spaces in file names and doesn't rip them apart.

    Bojan : I thought of that, but it still wouldn't work like that. It seems that %%r really had a wrong value and quotes wouldn't do much. Maybe I got something else wrong... I don't know.

Problem getting multiple selections in a listbox to export into a worksheet

Hi there, I am new to VBA and need some help. I have created an Excel userform with labels, text boxes, command buttons, frames, option buttons etc., and they all work fine. All the data entered in the userform goes into a sheet within the Excel workbook, with the exception of the listbox's selected data. I use the RowSource property to define the source of the listbox contents, and this works fine.

If I have the MultiSelect property set to Single and a single list item is chosen, the selected item is exported into the worksheet. If the MultiSelect property is set to Multi or Extended, then the field in the worksheet is left blank after the final Submit button is pressed (a command button with code for exporting data into the workbook). I created code for a command button alongside the listbox within a frame to try to get the multiple list data to export.

I know this should be straightforward, but I just can't seem to get it to work for multiple selections in a listbox. Do I need a command button (alongside the listbox)? When the user presses the Ctrl key to multi-select list items, what code do I need to store those selections for the export function?

Vanessa

From stackoverflow
  • The way that I have got all values from a multi-select listbox before was to iterate through the list, checking whether an item is selected and if it is, adding it to an array or a delimited string.

    Is this how you are doing it?
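
    In VBA that iteration looks something like this (a sketch: the listbox name, delimiter, and target cell are illustrative):

    Dim i As Long
    Dim selectedItems As String

    ' collect every selected entry into one delimited string
    For i = 0 To Me.ListBox1.ListCount - 1
        If Me.ListBox1.Selected(i) Then
            selectedItems = selectedItems & Me.ListBox1.List(i) & "; "
        End If
    Next i

    ' write the result to the worksheet
    Worksheets("Sheet1").Range("A2").Value = selectedItems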

autorun.inf: how to get drive letter?

I've added this entry into the context menu of a USB stick via autorun.inf:

[AutoRun]
shell\pageant=Activate SSH Key
shell\pageant\command=PuTTY\pageant.exe PuTTY\davids.ppk

Both PuTTY\pageant.exe and PuTTY\davids.ppk are files on the USB stick and should be picked up from there.

When I run this in a shell from the root of the stick it works as intended. But when starting it from the menu, it tries to load the key from C:\Windows\system32\PuTTY\davids.ppk (checked with Process Monitor).

Trying to use a simple cmd script resulted in this output:

    C:\Windows\system32>cd PuTTY
    The system cannot find the path specified.

    C:\Windows\system32>pageant.exe davids.ppk
    The command "pageant.exe" is either misspelled or
    could not be found.

Is there a way to get this working properly? I guess it should be able to pass the drive letter or get the explorer to use the stick as working directory, but I don't know how. Since I want to use the stick on the go, I'd rather avoid hardcoding my local drive letter.

From stackoverflow
  • It seems it reads the "Path" system variable. :( You may add the drive to the path, but getting the drive letter is the problem. :-(

    Update 1 : You can get the drive letter using a VB script.

    Update 2 : Yes, I think you can do that. Check this page.

    Update 3 : I tested the script. It works great.

    Dim oFSO, oDrive
    Set oFSO = WScript.CreateObject("Scripting.FileSystemObject")
    For Each oDrive In oFSO.Drives
        WScript.Echo "Drive Letter", oDrive.DriveLetter
        WScript.Echo "Drive Type", oDrive.DriveType
    Next


    Use some file existence check method to differentiate between multiple USB drives.
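
    For example, to pick out the stick among several drives, test for a file you know is on it (the pageant path is taken from the question):

    Dim oFSO, oDrive
    Set oFSO = WScript.CreateObject("Scripting.FileSystemObject")
    For Each oDrive In oFSO.Drives
        If oDrive.IsReady Then
            If oFSO.FileExists(oDrive.DriveLetter & ":\PuTTY\pageant.exe") Then
                WScript.Echo "Stick is drive " & oDrive.DriveLetter
            End If
        End If
    Next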

    David Schmitt : Thank you for your answer, it prompted me to make a few clarifications in the question.
  • I think the easiest solution would be to create a batch file to do this for you. Something named activatekey.cmd like this:

    REM switch to the directory containing this script
    for %%a in (%0) do cd /D %%~da%%~pa
    
    cd PuTTY
    pageant.exe davids.ppk
    

    Place the file activatekey.cmd in your USB stick, and change the autorun.inf to be:

    [AutoRun]
    shell\pageant=Activate SSH Key
    shell\pageant\command=activatekey.cmd
    
    David Schmitt : That didn't work, see my last edit.
    scraimer : Hmm, ok - I've edited the script change the directory to be the same as the location of the activatekey.cmd file. I wonder if that will work for you? (I actually got so excited by your idea, that I've already implemented something just like it on my USB stick! Except my solution is a little more complex - allowing to safely remove the stick even after the pageant.exe has executed.)

Commandeering GtkWindow

Is there an easy way to have child GTK applications start in a GtkWindow of the parent application's choice? Or would this require changing the GTK libraries?

From stackoverflow
  • If I understood the question correctly, then you should use GtkPlug/GtkSocket for embedding one application into another.

  • There is the GtkSocket::steal() function that sounds about right, but it has been deprecated. As dmitry-vk mentioned, look into the rest of the Socket/Plug pair API to figure out how it's done nowadays.

A good example of a custom log4net appender outputting text directly to an ASP.NET control (custom control, Label, etc.)?

I do have a DB appender and know how to get the output to a custom "logging" page. The idea is to have a quick and dirty switch to write, for example, to the Response object.

From stackoverflow
  • This really sounds like a dirty hack but you could get away with it using a MemoryAppender. Here's some sample code on how to get to the log data:

    var memoryAppender = (MemoryAppender)LogManager.GetRepository()
        .GetAppenders().Single(x => x.Name == appenderName);
    
    var events = memoryAppender.GetEvents();
    

    After picking the latest events from the appender for rendering you should probably do a

    memoryAppender.Clear();
    

How do I calculate a p-value if I have the t-statistic and d.f. (in Perl)?

Hi everyone,

I have written a Perl script that performs many one-sample t-tests. I get thousands of t-statistics with their degrees of freedom (df). I need to upgrade the script to also return their p-values (there are too many to look up manually in a table). Is there some kind of formula I can use for this, with the t-statistic and d.f. as input? I hope someone can help me with this; many thanks in advance!

A.A.

From stackoverflow
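  • A sketch of one possible approach (not from the original thread): the CPAN module Statistics::Distributions provides tprob, the upper-tail probability of the t-distribution, so a two-sided p-value can be computed from the t-statistic and its d.f. (the numbers below are illustrative):

    use strict;
    use warnings;
    use Statistics::Distributions qw(tprob);

    my ($t, $df) = (2.31, 14);           # example t-statistic and degrees of freedom
    my $p = 2 * tprob($df, abs($t));     # two-sided p-value
    print "p = $p\n";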

Using SSIS to map hard-coded values as part of a one-to-many table migration in Visual Studio

Using SQL Server 2005 and Visual Studio 2005, I'm trying to create a SSIS package to merge data from 1 table to several other tables.

The source table does not have several fields that the destination tables do. For example, 'CreatedBy' and 'CreatedDate' fields. I would like these to be hard-coded (in a sense) as part of the package import process.

The problem is not knowing what to use to facilitate this mapping. As a starting point it would be acceptable to have a hard-coded '1' and GetDate() for CreatedBy and CreatedDate respectively.

The "Input and Output Properties" or "Column Mappings" tab on the "Advanced Editor for Destination" options dialog does not have any apparent support for mapping "default" values such as GetDate().

Any suggestions as how to achieve this?

From stackoverflow
  • The SSIS way to create new columns (with static values or not) is to use the "Derived Column" transformation in your dataflow, in between the source and destination.

    This enables you to specify additional columns and their values using an expression. For the current date/time, use Getdate() as the expression and set the datatype to "date (DT_DATE)". To hard code a value, double-quote it in the expression (e.g. "1") and specify the relevant data type.

    Nick Josevski : Great, thanks worked a charm.
  • Rather than using a table as a source, how about specifying the query explicitly? That way, you can statically define values as part of the source.

    e.g.

    SELECT id, fieldOne, fieldTwo, '1' AS createdBy, GetDate() AS createdDate
    FROM SourceTable
    

    I've done this exact thing recently.

    One important thing to remember is that you need to make sure your datatypes match. I had a few problems with string data types not matching up (UTF-8 and the like).

Get URL from web browser

How can I retrieve the URL for a list box from a web browser that is currently running on the machine, using C#?

From stackoverflow

How to validate data input on a SharePoint form?

How does one verify a text field against another list's column? I am currently populating a drop-down list with a datasource and then comparing the text field with items in the drop-down using JavaScript. Is there a better way?

The second problem I am having is how to trigger the Validate Function.

I am aware of two custom forms for adding data to a SharePoint list. One is created using the Dataview Webpart in SharePoint Designer, and the other is created using the List Form Webpart in SharePoint Designer.

I have a DataFormWebPart I created using SharePoint Designer: Insert Dataview -> Insert Selected Fields as New Item Form. This gives Save and Cancel buttons at the end of the form. How do I intercept the Save button event?

I found one solution but it only works with the NewForm page that has OK Cancel Buttons. http://www.codeproject.com/KB/sharepoint/Control_validation.aspx

From stackoverflow

Unable to access Excel's Application.ComAddIns property if there are no AddIns installed

This code snippet for the Windows Script Host displays the number of COM add-ins currently installed in Excel.

It works fine except when there are no COM add-ins installed. I believe it should output a "0", but instead it raises an exception (code 800A03EC). Does anyone know why?

test.vbs

Set objExcel = CreateObject("Excel.Application")
WScript.Echo objExcel.ComAddIns.Count
From stackoverflow
  • Looks like a bug in Excel. You'll probably have to abuse VB's error handling to work around it.

    On Error Resume Next
    WScript.Echo objExcel.ComAddIns.Count
    If Err And Err.Number = 1004 Then   ' 1004 = &H03EC, the low word of 800A03EC
        WScript.Echo "No add-ins"
    End If
    On Error GoTo 0