There are actually many possible reasons why this might have happened.
First, 12 hours is a lot more realistic for patching 600 systems than 2 hours is, so I think it might help to have some additional background on how you normally patch 600 systems in 2 hours.
Second, the mix of updates in a patch cycle can significantly affect how long any given machine takes to install them. In particular, .NET updates can significantly increase total installation time.
If you review the Details tab of the Task History item for this scheduled update deployment, you'll find a column named "Completion Time". Sorting on this value might provide some insight as to what was running when -- that is, how many systems were patched in which time frames within that 12-hour window. For example, it's entirely possible (although quite unlikely) that a single hung system (or a few) caused the task to "run" for 12 hours, while in reality 99% of the systems were patched in the first few hours. Another possibility is that a network link was down and an entire subnet or site was inaccessible, contributing to the delay. It's also not impossible that the 12-hour Completion Time is simply a false indication: all of the clients may well have been patched in the first few hours, but the *task* did not properly terminate.
Also inspect the "Server Executed On" column. If you're patching 600 systems in 2 hours, then it's quite likely that you're using multiple Automation Role servers. Check to see that all of your Automation Role servers were actually in use during the execution of the installation task.
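If you'd rather slice the export yourself, something like the following sketch can answer both questions at once: when machines actually finished, and which servers did the work. Note that the column names here ("Machine Name", "Completion Time", "Server Executed On") and the timestamp format are assumptions -- verify them against your actual export before relying on this.

```python
# Sketch: bucket an exported task-history CSV by completion hour and by
# executing server. Column names and timestamp format are assumptions;
# adjust to match your real export.
import csv
import io
from collections import Counter
from datetime import datetime

# Illustrative sample data standing in for a real export.
SAMPLE_EXPORT = """\
Machine Name,Completion Time,Server Executed On
HOST-001,2024-01-15 01:12,AUTO-SRV-1
HOST-002,2024-01-15 01:48,AUTO-SRV-2
HOST-003,2024-01-15 02:05,AUTO-SRV-1
HOST-004,2024-01-15 13:01,AUTO-SRV-1
"""

def summarize(export_text: str):
    rows = list(csv.DictReader(io.StringIO(export_text)))
    # Bucket completions by hour to spot a long tail (e.g. one straggler
    # finishing many hours after everyone else).
    by_hour = Counter(
        datetime.strptime(r["Completion Time"], "%Y-%m-%d %H:%M").hour
        for r in rows
    )
    # Count completions per Automation Role server to confirm all of
    # them actually carried part of the load.
    by_server = Counter(r["Server Executed On"] for r in rows)
    return by_hour, by_server

if __name__ == "__main__":
    hours, servers = summarize(SAMPLE_EXPORT)
    print("completions per hour:", dict(hours))
    print("completions per server:", dict(servers))
```

If most completions cluster in the first hour or two and one server never appears in the output, that points you straight at the problem area.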
It's also possible that resources on one or more Automation Role servers were constrained at the time, causing those servers to work the task at less than full efficiency -- for example, another task running concurrently could have consumed resources the update installation task would otherwise have used.
Another variable that can impact total task execution time is the number of targeted systems that are inaccessible or nonexistent. The task execution engine has a notable amount of retry effort built into establishing a WMI connection to a target system, and even on a perfectly working client, building a WMI connection is a fairly expensive (read: time-consuming) operation. Waiting on a thread to time out trying to connect to a system that's not available keeps that thread from connecting to a system that is available.
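As a rough illustration of why dead targets matter: each unreachable system ties up a worker thread until its connection attempt times out, and that thread-time comes straight out of the pool. Here's a toy model -- the thread count, per-host times, and timeout below are all illustrative assumptions, not product defaults:

```python
# Back-of-envelope model (not actual product behavior) of how unreachable
# targets inflate total task duration. All numbers are illustrative.
def estimated_duration_minutes(reachable, unreachable,
                               threads=8,
                               minutes_per_host=1.0,
                               timeout_minutes=4.0):
    """Total thread-minutes of work, divided across the worker pool."""
    total_thread_minutes = (reachable * minutes_per_host
                            + unreachable * timeout_minutes)
    return total_thread_minutes / threads

# 600 reachable hosts at ~1 min each on 8 threads -> 75 minutes.
# Add 60 dead hosts that each burn a 4-minute timeout, and the same
# pool now needs 105 minutes.
```

The point isn't the specific numbers; it's that a modest count of dead or decommissioned targets can add a disproportionate amount of wall-clock time, so pruning stale machines from the target list is worth the effort.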
The Task History\Details is definitely the place to start in order to get a more detailed perspective of what happened while the task was executing.
Finally, you can export the Task History\Details to an Excel workbook, and I'd be happy to take a look at the task execution and offer my thoughts.